Test Report: QEMU_macOS 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-18:35410

Failed tests (96/275)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.71
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.87
55 TestCertOptions 9.98
56 TestCertExpiration 195.28
57 TestDockerFlags 10.04
58 TestForceSystemdFlag 10.02
59 TestForceSystemdEnv 10.79
104 TestFunctional/parallel/ServiceCmdConnect 36.79
169 TestMultiControlPlane/serial/StartCluster 227.24
170 TestMultiControlPlane/serial/DeployApp 703.04
171 TestMultiControlPlane/serial/PingHostFromPods 1.47
172 TestMultiControlPlane/serial/AddWorkerNode 51.07
175 TestMultiControlPlane/serial/CopyFile 1.11
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.78
178 TestMultiControlPlane/serial/RestartSecondaryNode 208.28
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 237.21
183 TestImageBuild/serial/Setup 10.05
186 TestJSONOutput/start/Command 9.84
192 TestJSONOutput/pause/Command 0.08
198 TestJSONOutput/unpause/Command 0.05
215 TestMinikubeProfile 10.14
218 TestMountStart/serial/StartWithMountFirst 10.05
221 TestMultiNode/serial/FreshStart2Nodes 9.82
222 TestMultiNode/serial/DeployApp2Nodes 112.12
223 TestMultiNode/serial/PingHostFrom2Pods 0.08
224 TestMultiNode/serial/AddNode 0.07
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.08
227 TestMultiNode/serial/CopyFile 0.06
228 TestMultiNode/serial/StopNode 0.13
229 TestMultiNode/serial/StartAfterStop 59.33
230 TestMultiNode/serial/RestartKeepsNodes 8.62
231 TestMultiNode/serial/DeleteNode 0.1
232 TestMultiNode/serial/StopMultiNode 3.09
233 TestMultiNode/serial/RestartMultiNode 5.25
234 TestMultiNode/serial/ValidateNameConflict 19.86
238 TestPreload 9.92
240 TestScheduledStopUnix 9.96
241 TestSkaffold 12.38
244 TestRunningBinaryUpgrade 613.35
246 TestKubernetesUpgrade 17.25
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.77
260 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.37
262 TestStoppedBinaryUpgrade/Upgrade 575.58
264 TestPause/serial/Start 10.04
274 TestNoKubernetes/serial/StartWithK8s 9.96
275 TestNoKubernetes/serial/StartWithStopK8s 5.31
276 TestNoKubernetes/serial/Start 5.29
280 TestNoKubernetes/serial/StartNoArgs 5.34
282 TestNetworkPlugins/group/auto/Start 9.78
283 TestNetworkPlugins/group/kindnet/Start 9.8
284 TestNetworkPlugins/group/calico/Start 10.08
285 TestNetworkPlugins/group/custom-flannel/Start 9.85
286 TestNetworkPlugins/group/false/Start 9.71
287 TestNetworkPlugins/group/enable-default-cni/Start 9.69
288 TestNetworkPlugins/group/flannel/Start 9.9
289 TestNetworkPlugins/group/bridge/Start 9.8
291 TestNetworkPlugins/group/kubenet/Start 9.89
293 TestStartStop/group/old-k8s-version/serial/FirstStart 10
294 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
298 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
300 TestStartStop/group/no-preload/serial/FirstStart 10.2
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
304 TestStartStop/group/old-k8s-version/serial/Pause 0.1
306 TestStartStop/group/embed-certs/serial/FirstStart 9.88
307 TestStartStop/group/no-preload/serial/DeployApp 0.09
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
311 TestStartStop/group/no-preload/serial/SecondStart 6.18
312 TestStartStop/group/embed-certs/serial/DeployApp 0.09
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/embed-certs/serial/SecondStart 5.26
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/no-preload/serial/Pause 0.1
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.93
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
326 TestStartStop/group/embed-certs/serial/Pause 0.1
328 TestStartStop/group/newest-cni/serial/FirstStart 9.84
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.25
338 TestStartStop/group/newest-cni/serial/SecondStart 5.25
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
341 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
342 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
346 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (10.71s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-065000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-065000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.705377958s)

-- stdout --
	{"specversion":"1.0","id":"d42bedf0-53e7-475f-a9ca-10fc2fe9ff95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-065000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"94e8a842-8a88-4989-8732-41b532babbce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"973fbe26-32b6-4c2e-89ba-ad26184f8bd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig"}}
	{"specversion":"1.0","id":"a357b2e6-2c75-46c6-a6c1-946119da6757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8e08ab2f-24d5-4af6-9d12-ae7be00ebdec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c6ca992-f108-4d39-99e6-51f80f106e4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube"}}
	{"specversion":"1.0","id":"a96d82d5-d656-4d16-b906-bc8fa1dd0e60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e9e9a093-94a4-4aa3-b780-2fd02f37606a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bad6ea10-5a18-4c73-96f3-d39ecab18428","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"fbad7b7f-ce18-45d2-8f27-16873adc333b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2406c90e-97b6-4c89-9fa6-04fda6b330d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-065000\" primary control-plane node in \"download-only-065000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5b0fcf3-9542-4740-aede-1fb07609c0d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d1fb77f-935f-472b-9e1a-7cbe436c5c8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60] Decompressors:map[bz2:0x140006374e0 gz:0x140006374e8 tar:0x14000637490 tar.bz2:0x140006374a0 tar.gz:0x140006374b0 tar.xz:0x140006374c0 tar.zst:0x140006374d0 tbz2:0x140006374a0 tgz:0x14
0006374b0 txz:0x140006374c0 tzst:0x140006374d0 xz:0x140006374f0 zip:0x14000637500 zst:0x140006374f8] Getters:map[file:0x1400171a630 http:0x1400077a870 https:0x1400077a960] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"337cf2d0-2fe6-4b01-a33b-3d5420a3360c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0718 20:24:50.066480    1714 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:24:50.066633    1714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:24:50.066636    1714 out.go:304] Setting ErrFile to fd 2...
	I0718 20:24:50.066639    1714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:24:50.066798    1714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	W0718 20:24:50.066867    1714 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-1213/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-1213/.minikube/config/config.json: no such file or directory
	I0718 20:24:50.068104    1714 out.go:298] Setting JSON to true
	I0718 20:24:50.085380    1714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1458,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:24:50.085447    1714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:24:50.091012    1714 out.go:97] [download-only-065000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:24:50.091122    1714 notify.go:220] Checking for updates...
	W0718 20:24:50.091140    1714 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball: no such file or directory
	I0718 20:24:50.094016    1714 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:24:50.096983    1714 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:24:50.102032    1714 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:24:50.105070    1714 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:24:50.108059    1714 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	W0718 20:24:50.114053    1714 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:24:50.114322    1714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:24:50.118887    1714 out.go:97] Using the qemu2 driver based on user configuration
	I0718 20:24:50.118906    1714 start.go:297] selected driver: qemu2
	I0718 20:24:50.118920    1714 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:24:50.118987    1714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:24:50.121982    1714 out.go:169] Automatically selected the socket_vmnet network
	I0718 20:24:50.127692    1714 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0718 20:24:50.127808    1714 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:24:50.127860    1714 cni.go:84] Creating CNI manager for ""
	I0718 20:24:50.127877    1714 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 20:24:50.127933    1714 start.go:340] cluster config:
	{Name:download-only-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:24:50.133005    1714 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:24:50.137969    1714 out.go:97] Downloading VM boot image ...
	I0718 20:24:50.137982    1714 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0718 20:24:54.528665    1714 out.go:97] Starting "download-only-065000" primary control-plane node in "download-only-065000" cluster
	I0718 20:24:54.528684    1714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:24:54.587890    1714 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 20:24:54.587912    1714 cache.go:56] Caching tarball of preloaded images
	I0718 20:24:54.588100    1714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:24:54.593147    1714 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0718 20:24:54.593157    1714 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:24:54.678832    1714 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 20:24:59.602784    1714 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:24:59.602927    1714 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:00.298556    1714 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0718 20:25:00.298734    1714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-065000/config.json ...
	I0718 20:25:00.298763    1714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-065000/config.json: {Name:mk1a7ebf572962433798bc760647481d0d78e6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:00.298986    1714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:25:00.299253    1714 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0718 20:25:00.698645    1714 out.go:169] 
	W0718 20:25:00.704691    1714 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60] Decompressors:map[bz2:0x140006374e0 gz:0x140006374e8 tar:0x14000637490 tar.bz2:0x140006374a0 tar.gz:0x140006374b0 tar.xz:0x140006374c0 tar.zst:0x140006374d0 tbz2:0x140006374a0 tgz:0x140006374b0 txz:0x140006374c0 tzst:0x140006374d0 xz:0x140006374f0 zip:0x14000637500 zst:0x140006374f8] Getters:map[file:0x1400171a630 http:0x1400077a870 https:0x1400077a960] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0718 20:25:00.704719    1714 out_reason.go:110] 
	W0718 20:25:00.711653    1714 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:25:00.715605    1714 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-065000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.71s)
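
The exit status 40 above comes from the kubectl cache step rather than the preload: the checksum URL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, most likely because v1.20.0 predates published darwin/arm64 kubectl binaries. A minimal sketch for confirming that outside the test harness (URLs taken from the log above; using v1.30.3, the version the other tests in this run exercise, as the comparison point is an assumption):

	# Expect 404 for the v1.20.0 darwin/arm64 checksum file, matching the error above
	curl -sSL -I -o /dev/null -w '%{http_code}\n' "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	# A release that does ship darwin/arm64 binaries (assumed example: v1.30.3) should return 200
	curl -sSL -I -o /dev/null -w '%{http_code}\n' "https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256"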

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-285000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-285000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.721345834s)

-- stdout --
	* [offline-docker-285000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-285000" primary control-plane node in "offline-docker-285000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-285000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:11:33.777117    6212 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:11:33.777238    6212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:33.777242    6212 out.go:304] Setting ErrFile to fd 2...
	I0718 21:11:33.777244    6212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:33.777386    6212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:11:33.778622    6212 out.go:298] Setting JSON to false
	I0718 21:11:33.796119    6212 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4261,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:11:33.796189    6212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:11:33.801412    6212 out.go:177] * [offline-docker-285000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:11:33.808292    6212 notify.go:220] Checking for updates...
	I0718 21:11:33.812270    6212 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:11:33.815319    6212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:11:33.818267    6212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:11:33.821341    6212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:11:33.824384    6212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:11:33.827356    6212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:11:33.830746    6212 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:33.830806    6212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:11:33.835277    6212 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:11:33.842394    6212 start.go:297] selected driver: qemu2
	I0718 21:11:33.842405    6212 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:11:33.842413    6212 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:11:33.844363    6212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:11:33.847414    6212 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:11:33.850318    6212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:11:33.850349    6212 cni.go:84] Creating CNI manager for ""
	I0718 21:11:33.850359    6212 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:11:33.850363    6212 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:11:33.850406    6212 start.go:340] cluster config:
	{Name:offline-docker-285000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-285000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:11:33.854068    6212 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:33.861309    6212 out.go:177] * Starting "offline-docker-285000" primary control-plane node in "offline-docker-285000" cluster
	I0718 21:11:33.865264    6212 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:11:33.865293    6212 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:11:33.865305    6212 cache.go:56] Caching tarball of preloaded images
	I0718 21:11:33.865376    6212 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:11:33.865380    6212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:11:33.865449    6212 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/offline-docker-285000/config.json ...
	I0718 21:11:33.865461    6212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/offline-docker-285000/config.json: {Name:mk45b986b36c88b0b34f19e0b24a869e8c977cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:11:33.865734    6212 start.go:360] acquireMachinesLock for offline-docker-285000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:33.865766    6212 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "offline-docker-285000"
	I0718 21:11:33.865776    6212 start.go:93] Provisioning new machine with config: &{Name:offline-docker-285000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-285000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:33.865802    6212 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:33.870333    6212 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:33.886387    6212 start.go:159] libmachine.API.Create for "offline-docker-285000" (driver="qemu2")
	I0718 21:11:33.886416    6212 client.go:168] LocalClient.Create starting
	I0718 21:11:33.886487    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:33.886531    6212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:33.886540    6212 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:33.886581    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:33.886603    6212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:33.886613    6212 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:33.886979    6212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:34.017385    6212 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:34.091615    6212 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:34.091624    6212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:34.091804    6212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2
	I0718 21:11:34.101574    6212 main.go:141] libmachine: STDOUT: 
	I0718 21:11:34.101595    6212 main.go:141] libmachine: STDERR: 
	I0718 21:11:34.101666    6212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2 +20000M
	I0718 21:11:34.111065    6212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:34.111081    6212 main.go:141] libmachine: STDERR: 
	I0718 21:11:34.111104    6212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2
	I0718 21:11:34.111108    6212 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:34.111121    6212 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:34.111157    6212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:90:d6:d0:44:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2
	I0718 21:11:34.112952    6212 main.go:141] libmachine: STDOUT: 
	I0718 21:11:34.112967    6212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:34.112986    6212 client.go:171] duration metric: took 226.573209ms to LocalClient.Create
	I0718 21:11:36.114993    6212 start.go:128] duration metric: took 2.249249625s to createHost
	I0718 21:11:36.115010    6212 start.go:83] releasing machines lock for "offline-docker-285000", held for 2.249304833s
	W0718 21:11:36.115024    6212 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:36.119178    6212 out.go:177] * Deleting "offline-docker-285000" in qemu2 ...
	W0718 21:11:36.133622    6212 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:36.133633    6212 start.go:729] Will try again in 5 seconds ...
	I0718 21:11:41.135592    6212 start.go:360] acquireMachinesLock for offline-docker-285000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:41.135721    6212 start.go:364] duration metric: took 92.292µs to acquireMachinesLock for "offline-docker-285000"
	I0718 21:11:41.135753    6212 start.go:93] Provisioning new machine with config: &{Name:offline-docker-285000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-285000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:41.135793    6212 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:41.149223    6212 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:41.166280    6212 start.go:159] libmachine.API.Create for "offline-docker-285000" (driver="qemu2")
	I0718 21:11:41.166306    6212 client.go:168] LocalClient.Create starting
	I0718 21:11:41.166374    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:41.166408    6212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:41.166415    6212 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:41.166447    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:41.166470    6212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:41.166478    6212 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:41.166743    6212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:41.296485    6212 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:41.404520    6212 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:41.404525    6212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:41.404698    6212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2
	I0718 21:11:41.414168    6212 main.go:141] libmachine: STDOUT: 
	I0718 21:11:41.414187    6212 main.go:141] libmachine: STDERR: 
	I0718 21:11:41.414245    6212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2 +20000M
	I0718 21:11:41.422007    6212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:41.422020    6212 main.go:141] libmachine: STDERR: 
	I0718 21:11:41.422036    6212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2
	I0718 21:11:41.422040    6212 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:41.422052    6212 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:41.422075    6212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:27:8e:6b:72:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/offline-docker-285000/disk.qcow2
	I0718 21:11:41.423623    6212 main.go:141] libmachine: STDOUT: 
	I0718 21:11:41.423635    6212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:41.423649    6212 client.go:171] duration metric: took 257.346791ms to LocalClient.Create
	I0718 21:11:43.425804    6212 start.go:128] duration metric: took 2.290047125s to createHost
	I0718 21:11:43.425871    6212 start.go:83] releasing machines lock for "offline-docker-285000", held for 2.290200625s
	W0718 21:11:43.426232    6212 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-285000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-285000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:43.435866    6212 out.go:177] 
	W0718 21:11:43.443052    6212 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:11:43.443127    6212 out.go:239] * 
	* 
	W0718 21:11:43.445562    6212 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:11:43.455942    6212 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-285000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-18 21:11:43.472387 -0700 PDT m=+2813.560410334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-285000 -n offline-docker-285000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-285000 -n offline-docker-285000: exit status 7 (67.315041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-285000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-285000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-285000
--- FAIL: TestOffline (9.87s)
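
The cluster-start failures in this report (here, and again in TestCertOptions and TestCertExpiration below) show the same error: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so no VM is ever created. A minimal sketch of how the socket_vmnet daemon could be checked on the build host before re-running (paths are taken from the log; whether the daemon is supervised by launchd is an assumption about this host's setup):

	# Is the socket present, and is a socket_vmnet process running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet is managed by launchd, inspect its service state (label assumed)
	sudo launchctl list | grep -i socket_vmnet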

TestCertOptions (9.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-935000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-935000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.726694583s)

-- stdout --
	* [cert-options-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-935000" primary control-plane node in "cert-options-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-935000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-935000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-935000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.7255ms)

-- stdout --
	* The control-plane node cert-options-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-935000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-935000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-935000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-935000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-935000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.382041ms)

-- stdout --
	* The control-plane node cert-options-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-935000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-935000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-935000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-18 21:12:14.323335 -0700 PDT m=+2844.412251001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-935000 -n cert-options-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-935000 -n cert-options-935000: exit status 7 (28.897458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-935000
--- FAIL: TestCertOptions (9.98s)

TestCertExpiration (195.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-240000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-240000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.920796s)

                                                
                                                
-- stdout --
	* [cert-expiration-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-240000" primary control-plane node in "cert-expiration-240000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-240000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-240000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-240000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-240000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
E0718 21:15:12.959572    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-240000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.2210175s)

                                                
                                                
-- stdout --
	* [cert-expiration-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-240000" primary control-plane node in "cert-expiration-240000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-240000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-240000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-240000" primary control-plane node in "cert-expiration-240000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-18 21:15:14.597091 -0700 PDT m=+3024.691227751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-240000 -n cert-expiration-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-240000 -n cert-expiration-240000: exit status 7 (64.553625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-240000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-240000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-240000
--- FAIL: TestCertExpiration (195.28s)

TestDockerFlags (10.04s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-199000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-199000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.817915459s)

                                                
                                                
-- stdout --
	* [docker-flags-199000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-199000" primary control-plane node in "docker-flags-199000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-199000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:11:54.431847    6405 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:11:54.431984    6405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:54.431987    6405 out.go:304] Setting ErrFile to fd 2...
	I0718 21:11:54.431990    6405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:54.432110    6405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:11:54.433159    6405 out.go:298] Setting JSON to false
	I0718 21:11:54.448979    6405 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4282,"bootTime":1721358032,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:11:54.449051    6405 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:11:54.454490    6405 out.go:177] * [docker-flags-199000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:11:54.460333    6405 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:11:54.460396    6405 notify.go:220] Checking for updates...
	I0718 21:11:54.467289    6405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:11:54.470294    6405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:11:54.473259    6405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:11:54.476288    6405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:11:54.479328    6405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:11:54.482756    6405 config.go:182] Loaded profile config "force-systemd-flag-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:54.482825    6405 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:54.482884    6405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:11:54.487212    6405 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:11:54.494304    6405 start.go:297] selected driver: qemu2
	I0718 21:11:54.494312    6405 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:11:54.494319    6405 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:11:54.496424    6405 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:11:54.499253    6405 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:11:54.502429    6405 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0718 21:11:54.502455    6405 cni.go:84] Creating CNI manager for ""
	I0718 21:11:54.502461    6405 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:11:54.502471    6405 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:11:54.502499    6405 start.go:340] cluster config:
	{Name:docker-flags-199000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-199000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:11:54.506013    6405 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:54.513269    6405 out.go:177] * Starting "docker-flags-199000" primary control-plane node in "docker-flags-199000" cluster
	I0718 21:11:54.517125    6405 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:11:54.517140    6405 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:11:54.517152    6405 cache.go:56] Caching tarball of preloaded images
	I0718 21:11:54.517219    6405 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:11:54.517224    6405 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:11:54.517286    6405 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/docker-flags-199000/config.json ...
	I0718 21:11:54.517299    6405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/docker-flags-199000/config.json: {Name:mk91fc88252b605572542c0d907dddd1f635a90e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:11:54.517514    6405 start.go:360] acquireMachinesLock for docker-flags-199000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:54.517548    6405 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "docker-flags-199000"
	I0718 21:11:54.517559    6405 start.go:93] Provisioning new machine with config: &{Name:docker-flags-199000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-199000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:54.517585    6405 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:54.526128    6405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:54.543789    6405 start.go:159] libmachine.API.Create for "docker-flags-199000" (driver="qemu2")
	I0718 21:11:54.543814    6405 client.go:168] LocalClient.Create starting
	I0718 21:11:54.543869    6405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:54.543904    6405 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:54.543913    6405 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:54.543951    6405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:54.543975    6405 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:54.543981    6405 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:54.544366    6405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:54.673459    6405 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:54.748021    6405 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:54.748026    6405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:54.748204    6405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2
	I0718 21:11:54.757339    6405 main.go:141] libmachine: STDOUT: 
	I0718 21:11:54.757359    6405 main.go:141] libmachine: STDERR: 
	I0718 21:11:54.757405    6405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2 +20000M
	I0718 21:11:54.765276    6405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:54.765292    6405 main.go:141] libmachine: STDERR: 
	I0718 21:11:54.765307    6405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2
	I0718 21:11:54.765312    6405 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:54.765325    6405 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:54.765353    6405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c0:2d:69:f2:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2
	I0718 21:11:54.767014    6405 main.go:141] libmachine: STDOUT: 
	I0718 21:11:54.767029    6405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:54.767047    6405 client.go:171] duration metric: took 223.2355ms to LocalClient.Create
	I0718 21:11:56.769174    6405 start.go:128] duration metric: took 2.251633833s to createHost
	I0718 21:11:56.769234    6405 start.go:83] releasing machines lock for "docker-flags-199000", held for 2.251742125s
	W0718 21:11:56.769321    6405 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:56.779446    6405 out.go:177] * Deleting "docker-flags-199000" in qemu2 ...
	W0718 21:11:56.803416    6405 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:56.803447    6405 start.go:729] Will try again in 5 seconds ...
	I0718 21:12:01.805473    6405 start.go:360] acquireMachinesLock for docker-flags-199000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:12:01.882990    6405 start.go:364] duration metric: took 77.406708ms to acquireMachinesLock for "docker-flags-199000"
	I0718 21:12:01.883125    6405 start.go:93] Provisioning new machine with config: &{Name:docker-flags-199000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-199000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:12:01.883420    6405 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:12:01.890512    6405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:12:01.942984    6405 start.go:159] libmachine.API.Create for "docker-flags-199000" (driver="qemu2")
	I0718 21:12:01.943033    6405 client.go:168] LocalClient.Create starting
	I0718 21:12:01.943158    6405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:12:01.943216    6405 main.go:141] libmachine: Decoding PEM data...
	I0718 21:12:01.943236    6405 main.go:141] libmachine: Parsing certificate...
	I0718 21:12:01.943296    6405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:12:01.943340    6405 main.go:141] libmachine: Decoding PEM data...
	I0718 21:12:01.943354    6405 main.go:141] libmachine: Parsing certificate...
	I0718 21:12:01.943933    6405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:12:02.094764    6405 main.go:141] libmachine: Creating SSH key...
	I0718 21:12:02.149817    6405 main.go:141] libmachine: Creating Disk image...
	I0718 21:12:02.149822    6405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:12:02.150010    6405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2
	I0718 21:12:02.159401    6405 main.go:141] libmachine: STDOUT: 
	I0718 21:12:02.159419    6405 main.go:141] libmachine: STDERR: 
	I0718 21:12:02.159478    6405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2 +20000M
	I0718 21:12:02.167304    6405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:12:02.167326    6405 main.go:141] libmachine: STDERR: 
	I0718 21:12:02.167337    6405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2
	I0718 21:12:02.167342    6405 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:12:02.167358    6405 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:12:02.167385    6405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0f:08:dd:1b:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/docker-flags-199000/disk.qcow2
	I0718 21:12:02.168991    6405 main.go:141] libmachine: STDOUT: 
	I0718 21:12:02.169005    6405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:12:02.169018    6405 client.go:171] duration metric: took 225.985875ms to LocalClient.Create
	I0718 21:12:04.171143    6405 start.go:128] duration metric: took 2.287762s to createHost
	I0718 21:12:04.171188    6405 start.go:83] releasing machines lock for "docker-flags-199000", held for 2.288207s
	W0718 21:12:04.171570    6405 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-199000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-199000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:12:04.185196    6405 out.go:177] 
	W0718 21:12:04.192468    6405 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:12:04.192491    6405 out.go:239] * 
	* 
	W0718 21:12:04.194787    6405 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:12:04.208100    6405 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-199000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-199000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-199000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.0105ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-199000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-199000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-199000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-199000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-199000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-199000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-199000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-199000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-199000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.6795ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-199000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-199000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-199000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-199000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-199000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-199000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-18 21:12:04.345011 -0700 PDT m=+2834.433638334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-199000 -n docker-flags-199000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-199000 -n docker-flags-199000: exit status 7 (28.479334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-199000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-199000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-199000
--- FAIL: TestDockerFlags (10.04s)

TestForceSystemdFlag (10.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-439000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-439000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.832285166s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-439000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-439000" primary control-plane node in "force-systemd-flag-439000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-439000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:11:49.461418    6384 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:11:49.461548    6384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:49.461552    6384 out.go:304] Setting ErrFile to fd 2...
	I0718 21:11:49.461555    6384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:49.461675    6384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:11:49.462711    6384 out.go:298] Setting JSON to false
	I0718 21:11:49.478583    6384 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4277,"bootTime":1721358032,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:11:49.478651    6384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:11:49.483606    6384 out.go:177] * [force-systemd-flag-439000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:11:49.490656    6384 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:11:49.490705    6384 notify.go:220] Checking for updates...
	I0718 21:11:49.498642    6384 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:11:49.501522    6384 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:11:49.504629    6384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:11:49.507616    6384 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:11:49.508894    6384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:11:49.514393    6384 config.go:182] Loaded profile config "force-systemd-env-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:49.514466    6384 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:49.514513    6384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:11:49.518639    6384 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:11:49.523538    6384 start.go:297] selected driver: qemu2
	I0718 21:11:49.523543    6384 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:11:49.523549    6384 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:11:49.525735    6384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:11:49.528588    6384 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:11:49.531741    6384 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 21:11:49.531756    6384 cni.go:84] Creating CNI manager for ""
	I0718 21:11:49.531762    6384 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:11:49.531767    6384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:11:49.531798    6384 start.go:340] cluster config:
	{Name:force-systemd-flag-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:11:49.535356    6384 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:49.542618    6384 out.go:177] * Starting "force-systemd-flag-439000" primary control-plane node in "force-systemd-flag-439000" cluster
	I0718 21:11:49.546539    6384 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:11:49.546555    6384 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:11:49.546566    6384 cache.go:56] Caching tarball of preloaded images
	I0718 21:11:49.546622    6384 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:11:49.546628    6384 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:11:49.546685    6384 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/force-systemd-flag-439000/config.json ...
	I0718 21:11:49.546697    6384 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/force-systemd-flag-439000/config.json: {Name:mk8703c4c69156d27c2e4e6e841bc4ffd42de82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:11:49.546902    6384 start.go:360] acquireMachinesLock for force-systemd-flag-439000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:49.546936    6384 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "force-systemd-flag-439000"
	I0718 21:11:49.546948    6384 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:49.546978    6384 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:49.555610    6384 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:49.573319    6384 start.go:159] libmachine.API.Create for "force-systemd-flag-439000" (driver="qemu2")
	I0718 21:11:49.573349    6384 client.go:168] LocalClient.Create starting
	I0718 21:11:49.573412    6384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:49.573445    6384 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:49.573456    6384 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:49.573495    6384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:49.573518    6384 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:49.573527    6384 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:49.574010    6384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:49.701197    6384 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:49.826355    6384 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:49.826360    6384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:49.826531    6384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2
	I0718 21:11:49.835690    6384 main.go:141] libmachine: STDOUT: 
	I0718 21:11:49.835709    6384 main.go:141] libmachine: STDERR: 
	I0718 21:11:49.835764    6384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2 +20000M
	I0718 21:11:49.843741    6384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:49.843756    6384 main.go:141] libmachine: STDERR: 
	I0718 21:11:49.843774    6384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2
	I0718 21:11:49.843782    6384 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:49.843795    6384 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:49.843825    6384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:e0:e6:c7:7e:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2
	I0718 21:11:49.845389    6384 main.go:141] libmachine: STDOUT: 
	I0718 21:11:49.845405    6384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:49.845422    6384 client.go:171] duration metric: took 272.076667ms to LocalClient.Create
	I0718 21:11:51.847539    6384 start.go:128] duration metric: took 2.300607083s to createHost
	I0718 21:11:51.847586    6384 start.go:83] releasing machines lock for "force-systemd-flag-439000", held for 2.300707125s
	W0718 21:11:51.847656    6384 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:51.858728    6384 out.go:177] * Deleting "force-systemd-flag-439000" in qemu2 ...
	W0718 21:11:51.877424    6384 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:51.877446    6384 start.go:729] Will try again in 5 seconds ...
	I0718 21:11:56.879492    6384 start.go:360] acquireMachinesLock for force-systemd-flag-439000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:56.880047    6384 start.go:364] duration metric: took 423.167µs to acquireMachinesLock for "force-systemd-flag-439000"
	I0718 21:11:56.880185    6384 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:56.880478    6384 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:56.889834    6384 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:56.943095    6384 start.go:159] libmachine.API.Create for "force-systemd-flag-439000" (driver="qemu2")
	I0718 21:11:56.943148    6384 client.go:168] LocalClient.Create starting
	I0718 21:11:56.943299    6384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:56.943362    6384 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:56.943380    6384 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:56.943443    6384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:56.943488    6384 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:56.943500    6384 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:56.943997    6384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:57.087914    6384 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:57.202779    6384 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:57.202788    6384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:57.202962    6384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2
	I0718 21:11:57.212364    6384 main.go:141] libmachine: STDOUT: 
	I0718 21:11:57.212384    6384 main.go:141] libmachine: STDERR: 
	I0718 21:11:57.212437    6384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2 +20000M
	I0718 21:11:57.220172    6384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:57.220185    6384 main.go:141] libmachine: STDERR: 
	I0718 21:11:57.220196    6384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2
	I0718 21:11:57.220201    6384 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:57.220214    6384 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:57.220240    6384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:98:95:84:17:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-flag-439000/disk.qcow2
	I0718 21:11:57.221767    6384 main.go:141] libmachine: STDOUT: 
	I0718 21:11:57.221784    6384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:57.221796    6384 client.go:171] duration metric: took 278.649208ms to LocalClient.Create
	I0718 21:11:59.223916    6384 start.go:128] duration metric: took 2.343476417s to createHost
	I0718 21:11:59.223967    6384 start.go:83] releasing machines lock for "force-systemd-flag-439000", held for 2.343958667s
	W0718 21:11:59.224428    6384 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-439000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-439000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:59.236101    6384 out.go:177] 
	W0718 21:11:59.240153    6384 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:11:59.240187    6384 out.go:239] * 
	* 
	W0718 21:11:59.242672    6384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:11:59.251904    6384 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-439000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-439000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-439000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.496042ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-439000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-439000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-439000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-18 21:11:59.344239 -0700 PDT m=+2829.432721376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-439000 -n force-systemd-flag-439000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-439000 -n force-systemd-flag-439000: exit status 7 (32.989333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-439000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-439000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-439000
--- FAIL: TestForceSystemdFlag (10.02s)

TestForceSystemdEnv (10.79s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-598000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-598000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.601937333s)

                                                
                                                
-- stdout --
	* [force-systemd-env-598000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-598000" primary control-plane node in "force-systemd-env-598000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-598000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:11:43.643218    6352 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:11:43.643342    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:43.643345    6352 out.go:304] Setting ErrFile to fd 2...
	I0718 21:11:43.643348    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:43.643464    6352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:11:43.644497    6352 out.go:298] Setting JSON to false
	I0718 21:11:43.660732    6352 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4271,"bootTime":1721358032,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:11:43.660800    6352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:11:43.665814    6352 out.go:177] * [force-systemd-env-598000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:11:43.672708    6352 notify.go:220] Checking for updates...
	I0718 21:11:43.676758    6352 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:11:43.683674    6352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:11:43.691758    6352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:11:43.707369    6352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:11:43.715698    6352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:11:43.723787    6352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0718 21:11:43.728039    6352 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:43.728088    6352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:11:43.731735    6352 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:11:43.738734    6352 start.go:297] selected driver: qemu2
	I0718 21:11:43.738740    6352 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:11:43.738745    6352 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:11:43.741080    6352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:11:43.744742    6352 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:11:43.748660    6352 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 21:11:43.748686    6352 cni.go:84] Creating CNI manager for ""
	I0718 21:11:43.748695    6352 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:11:43.748700    6352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:11:43.748727    6352 start.go:340] cluster config:
	{Name:force-systemd-env-598000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-598000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:11:43.752594    6352 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:43.759791    6352 out.go:177] * Starting "force-systemd-env-598000" primary control-plane node in "force-systemd-env-598000" cluster
	I0718 21:11:43.763690    6352 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:11:43.763707    6352 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:11:43.763715    6352 cache.go:56] Caching tarball of preloaded images
	I0718 21:11:43.763777    6352 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:11:43.763783    6352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:11:43.763837    6352 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/force-systemd-env-598000/config.json ...
	I0718 21:11:43.763849    6352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/force-systemd-env-598000/config.json: {Name:mk7fc497dbf1f0356bd3a8791a83215b07aa11c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:11:43.764075    6352 start.go:360] acquireMachinesLock for force-systemd-env-598000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:43.764110    6352 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "force-systemd-env-598000"
	I0718 21:11:43.764120    6352 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-598000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-598000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:43.764154    6352 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:43.768852    6352 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:43.786036    6352 start.go:159] libmachine.API.Create for "force-systemd-env-598000" (driver="qemu2")
	I0718 21:11:43.786072    6352 client.go:168] LocalClient.Create starting
	I0718 21:11:43.786136    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:43.786167    6352 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:43.786176    6352 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:43.786215    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:43.786242    6352 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:43.786261    6352 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:43.786597    6352 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:43.917688    6352 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:43.988475    6352 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:43.988479    6352 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:43.988639    6352 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2
	I0718 21:11:43.998040    6352 main.go:141] libmachine: STDOUT: 
	I0718 21:11:43.998058    6352 main.go:141] libmachine: STDERR: 
	I0718 21:11:43.998131    6352 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2 +20000M
	I0718 21:11:44.006450    6352 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:44.006464    6352 main.go:141] libmachine: STDERR: 
	I0718 21:11:44.006481    6352 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2
	I0718 21:11:44.006494    6352 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:44.006510    6352 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:44.006540    6352 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:1f:73:f3:e7:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2
	I0718 21:11:44.008281    6352 main.go:141] libmachine: STDOUT: 
	I0718 21:11:44.008297    6352 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:44.008315    6352 client.go:171] duration metric: took 222.24125ms to LocalClient.Create
	I0718 21:11:46.010365    6352 start.go:128] duration metric: took 2.246268875s to createHost
	I0718 21:11:46.010380    6352 start.go:83] releasing machines lock for "force-systemd-env-598000", held for 2.246330792s
	W0718 21:11:46.010396    6352 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:46.018928    6352 out.go:177] * Deleting "force-systemd-env-598000" in qemu2 ...
	W0718 21:11:46.027334    6352 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:46.027345    6352 start.go:729] Will try again in 5 seconds ...
	I0718 21:11:51.029430    6352 start.go:360] acquireMachinesLock for force-systemd-env-598000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:51.847721    6352 start.go:364] duration metric: took 818.196959ms to acquireMachinesLock for "force-systemd-env-598000"
	I0718 21:11:51.847879    6352 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-598000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-598000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:51.848177    6352 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:51.853770    6352 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0718 21:11:51.901863    6352 start.go:159] libmachine.API.Create for "force-systemd-env-598000" (driver="qemu2")
	I0718 21:11:51.901908    6352 client.go:168] LocalClient.Create starting
	I0718 21:11:51.902035    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:51.902101    6352 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:51.902124    6352 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:51.902187    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:51.902233    6352 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:51.902244    6352 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:51.902928    6352 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:52.053785    6352 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:52.148194    6352 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:52.148200    6352 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:52.148395    6352 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2
	I0718 21:11:52.158123    6352 main.go:141] libmachine: STDOUT: 
	I0718 21:11:52.158139    6352 main.go:141] libmachine: STDERR: 
	I0718 21:11:52.158199    6352 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2 +20000M
	I0718 21:11:52.166343    6352 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:52.166357    6352 main.go:141] libmachine: STDERR: 
	I0718 21:11:52.166370    6352 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2
	I0718 21:11:52.166374    6352 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:52.166385    6352 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:52.166414    6352 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b1:b7:35:9a:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/force-systemd-env-598000/disk.qcow2
	I0718 21:11:52.168091    6352 main.go:141] libmachine: STDOUT: 
	I0718 21:11:52.168105    6352 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:52.168116    6352 client.go:171] duration metric: took 266.207916ms to LocalClient.Create
	I0718 21:11:54.170319    6352 start.go:128] duration metric: took 2.322168833s to createHost
	I0718 21:11:54.170386    6352 start.go:83] releasing machines lock for "force-systemd-env-598000", held for 2.322681666s
	W0718 21:11:54.170766    6352 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-598000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-598000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:54.183477    6352 out.go:177] 
	W0718 21:11:54.189403    6352 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:11:54.189453    6352 out.go:239] * 
	* 
	W0718 21:11:54.192340    6352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:11:54.201333    6352 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-598000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-598000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-598000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.669916ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-598000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-598000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-598000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-18 21:11:54.298271 -0700 PDT m=+2824.386607292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-598000 -n force-systemd-env-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-598000 -n force-systemd-env-598000: exit status 7 (33.633708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-598000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-598000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-598000
--- FAIL: TestForceSystemdEnv (10.79s)

TestFunctional/parallel/ServiceCmdConnect (36.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-020000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-020000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-tjdxh" [2423f3b1-c36d-45dd-b921-48358d6e5057] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-tjdxh" [2423f3b1-c36d-45dd-b921-48358d6e5057] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.012813958s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32116
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32116: Get "http://192.168.105.4:32116": dial tcp 192.168.105.4:32116: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-020000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-tjdxh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-020000/192.168.105.4
Start Time:       Thu, 18 Jul 2024 20:35:26 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://0bd2e7e7ebde7a98cbc9e93e7d89a7b25ba12fe306f1be873a73e0855c9d426c
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 18 Jul 2024 20:35:42 -0700
Finished:     Thu, 18 Jul 2024 20:35:42 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chcbs (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-chcbs:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-tjdxh to functional-020000
Normal   Pulled     20s (x3 over 36s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    20s (x3 over 36s)  kubelet            Created container echoserver-arm
Normal   Started    20s (x3 over 36s)  kubelet            Started container echoserver-arm
Warning  BackOff    4s (x3 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-tjdxh_default(2423f3b1-c36d-45dd-b921-48358d6e5057)

                                                
                                                
functional_test.go:1604: (dbg) Run:  kubectl --context functional-020000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-020000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.249.234
IPs:                      10.103.249.234
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32116/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-020000 -n functional-020000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh -- ls                                                                                          | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh cat                                                                                            | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | /mount-9p/test-1721360150538138000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh stat                                                                                           | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh stat                                                                                           | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh sudo                                                                                           | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1558104097/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh -- ls                                                                                          | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh sudo                                                                                           | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-020000 ssh findmnt                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-020000 --dry-run                                                                                       | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-020000                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|           | -p functional-020000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:35:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:35:57.751787    4674 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:35:57.751899    4674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:35:57.751903    4674 out.go:304] Setting ErrFile to fd 2...
	I0718 20:35:57.751905    4674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:35:57.752024    4674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:35:57.753347    4674 out.go:298] Setting JSON to false
	I0718 20:35:57.770342    4674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2125,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:35:57.770454    4674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:35:57.774763    4674 out.go:177] * [functional-020000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:35:57.783130    4674 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:35:57.783193    4674 notify.go:220] Checking for updates...
	I0718 20:35:57.790726    4674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:35:57.793784    4674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:35:57.796815    4674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:35:57.800741    4674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:35:57.803814    4674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:35:57.807005    4674 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:35:57.807245    4674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:35:57.811696    4674 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 20:35:57.818689    4674 start.go:297] selected driver: qemu2
	I0718 20:35:57.818694    4674 start.go:901] validating driver "qemu2" against &{Name:functional-020000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-020000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:35:57.818741    4674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:35:57.824717    4674 out.go:177] 
	W0718 20:35:57.828777    4674 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0718 20:35:57.831815    4674 out.go:177] 
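	The start command above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below minikube's usable minimum of 1800MB; the dry-run is rejected during validation and never reaches the qemu2 driver. For comparison, a dry-run sized above that floor would look like the following (illustrative command only, not part of the recorded run):
	
	    minikube start -p functional-020000 --dry-run --memory 2048mb --alsologtostderr --driver=qemu2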
	
	
	==> Docker <==
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.516508427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.516554177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.516559594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.516798509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:35:58 functional-020000 dockerd[5823]: time="2024-07-19T03:35:58.555279162Z" level=info msg="ignoring event" container=6b854c6005a58e905f82a38381bf39f3426896b248c312a69d0d6ec9df2b0814 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.555555494Z" level=info msg="shim disconnected" id=6b854c6005a58e905f82a38381bf39f3426896b248c312a69d0d6ec9df2b0814 namespace=moby
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.555588369Z" level=warning msg="cleaning up after shim disconnected" id=6b854c6005a58e905f82a38381bf39f3426896b248c312a69d0d6ec9df2b0814 namespace=moby
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.555592744Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.725780723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.725844265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.725850765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.726059305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.748810174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.748865049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.748980881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:35:58 functional-020000 dockerd[5829]: time="2024-07-19T03:35:58.752203613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:35:58 functional-020000 cri-dockerd[6094]: time="2024-07-19T03:35:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3f6dfb2e5fd595cc68801fc907d79b631c71858890cdc9eb7b9b448737fec40e/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 03:35:58 functional-020000 cri-dockerd[6094]: time="2024-07-19T03:35:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0f3082c4d2589edc96ce7ebdd8004115fe2dbb4f2689dc9212711a90d2a41c6/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 03:35:59 functional-020000 dockerd[5823]: time="2024-07-19T03:35:59.037157128Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 19 03:36:00 functional-020000 cri-dockerd[6094]: time="2024-07-19T03:36:00Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 19 03:36:00 functional-020000 dockerd[5829]: time="2024-07-19T03:36:00.702351100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:36:00 functional-020000 dockerd[5829]: time="2024-07-19T03:36:00.702382558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:36:00 functional-020000 dockerd[5829]: time="2024-07-19T03:36:00.702391141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:36:00 functional-020000 dockerd[5829]: time="2024-07-19T03:36:00.702420641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:36:00 functional-020000 dockerd[5823]: time="2024-07-19T03:36:00.917212212Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	7892180db9e9e       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   2 seconds ago        Running             dashboard-metrics-scraper   0                   3f6dfb2e5fd59       dashboard-metrics-scraper-b5fc48f67-znvdd
	6b854c6005a58       72565bf5bbedf                                                                                          4 seconds ago        Exited              echoserver-arm              3                   aad1e397eb74c       hello-node-65f5d5cc78-6gxzl
	a415f9da6a9cb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    10 seconds ago       Exited              mount-munger                0                   8a72602ec544e       busybox-mount
	e2be513035071       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                          18 seconds ago       Running             myfrontend                  0                   aea0f35310ce2       sp-pod
	0bd2e7e7ebde7       72565bf5bbedf                                                                                          20 seconds ago       Exited              echoserver-arm              2                   3c67ecf946bab       hello-node-connect-6f49f58cd5-tjdxh
	d25775f73a574       nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                          42 seconds ago       Running             nginx                       0                   8719442e42792       nginx-svc
	7a9f60a44ec1e       2437cf7621777                                                                                          About a minute ago   Running             coredns                     2                   8cc5babf94d64       coredns-7db6d8ff4d-pcg5s
	14cd9af455a26       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   d9ea613cf2629       storage-provisioner
	fb1484792fc64       2351f570ed0ea                                                                                          About a minute ago   Running             kube-proxy                  2                   f059dbde1df82       kube-proxy-dtmv9
	a65977d710c4c       014faa467e297                                                                                          About a minute ago   Running             etcd                        2                   11a71a2aa3df8       etcd-functional-020000
	a954c440ac1fb       d48f992a22722                                                                                          About a minute ago   Running             kube-scheduler              2                   f68a502dc5237       kube-scheduler-functional-020000
	a9610d2195e50       8e97cdb19e7cc                                                                                          About a minute ago   Running             kube-controller-manager     2                   5b8f753f6a578       kube-controller-manager-functional-020000
	1fd36777d1847       61773190d42ff                                                                                          About a minute ago   Running             kube-apiserver              0                   e7c1117f7dfa7       kube-apiserver-functional-020000
	c16d31cfd09db       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         2                   d93ec1277e3cb       storage-provisioner
	3588a999a500c       2437cf7621777                                                                                          About a minute ago   Exited              coredns                     1                   01c6b84dbf647       coredns-7db6d8ff4d-pcg5s
	f952a9983cd1d       2351f570ed0ea                                                                                          About a minute ago   Exited              kube-proxy                  1                   c91f421d699d6       kube-proxy-dtmv9
	cd1781c4f8a95       8e97cdb19e7cc                                                                                          About a minute ago   Exited              kube-controller-manager     1                   6a31ab9152ca0       kube-controller-manager-functional-020000
	3c52ca096c5c1       d48f992a22722                                                                                          About a minute ago   Exited              kube-scheduler              1                   1ac116892c9a0       kube-scheduler-functional-020000
	94d080a49ef9c       014faa467e297                                                                                          About a minute ago   Exited              etcd                        1                   b7d82019f1ca2       etcd-functional-020000
	
	
	==> coredns [3588a999a500] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37211 - 6808 "HINFO IN 311184100781433966.1782507326362566847. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.010314687s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
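	The connection-refused errors above are this earlier CoreDNS instance failing to reach the API server at 10.96.0.1:443 while the control plane was restarting; it then received SIGTERM and shut down. The replacement instance (7a9f60a44ec1, next section) resolves nginx-svc normally. A quick health check after such a restart (illustrative command, assuming the kubeconfig context created for this profile) would be:
	
	    kubectl --context functional-020000 -n kube-system get pods -l k8s-app=kube-dns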
	
	
	==> coredns [7a9f60a44ec1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36137 - 29693 "HINFO IN 1521493289120668976.8385337706574660953. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010131631s
	[INFO] 10.244.0.1:8293 - 27201 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000093333s
	[INFO] 10.244.0.1:33852 - 27626 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000100958s
	[INFO] 10.244.0.1:31866 - 12821 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000050042s
	[INFO] 10.244.0.1:18251 - 15695 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001168369s
	[INFO] 10.244.0.1:25131 - 15932 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000060625s
	[INFO] 10.244.0.1:2427 - 45816 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000087791s
	
	
	==> describe nodes <==
	Name:               functional-020000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-020000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=functional-020000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T20_33_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:33:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-020000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:35:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:35:49 +0000   Fri, 19 Jul 2024 03:33:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:35:49 +0000   Fri, 19 Jul 2024 03:33:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:35:49 +0000   Fri, 19 Jul 2024 03:33:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:35:49 +0000   Fri, 19 Jul 2024 03:33:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-020000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b8fa1783ee8409ca28f8216e17289f0
	  System UUID:                1b8fa1783ee8409ca28f8216e17289f0
	  Boot ID:                    78b19ecb-e36c-405d-a850-084b7d058acc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-6gxzl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     hello-node-connect-6f49f58cd5-tjdxh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-7db6d8ff4d-pcg5s                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m18s
	  kube-system                 etcd-functional-020000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m33s
	  kube-system                 kube-apiserver-functional-020000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-functional-020000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-dtmv9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-functional-020000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-znvdd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-tsx9h        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m18s              kube-proxy       
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 115s               kube-proxy       
	  Normal  NodeHasSufficientMemory  2m33s              kubelet          Node functional-020000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m33s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m33s              kubelet          Node functional-020000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s              kubelet          Node functional-020000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m33s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m29s              kubelet          Node functional-020000 status is now: NodeReady
	  Normal  RegisteredNode           2m19s              node-controller  Node functional-020000 event: Registered Node functional-020000 in Controller
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node functional-020000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node functional-020000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node functional-020000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s               node-controller  Node functional-020000 event: Registered Node functional-020000 in Controller
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node functional-020000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node functional-020000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node functional-020000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                node-controller  Node functional-020000 event: Registered Node functional-020000 in Controller
	
	
	==> dmesg <==
	[ +12.150404] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.407884] systemd-fstab-generator[4886]: Ignoring "noauto" option for root device
	[ +10.453465] systemd-fstab-generator[5355]: Ignoring "noauto" option for root device
	[  +0.053704] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.102092] systemd-fstab-generator[5390]: Ignoring "noauto" option for root device
	[  +0.099932] systemd-fstab-generator[5402]: Ignoring "noauto" option for root device
	[  +0.093088] systemd-fstab-generator[5416]: Ignoring "noauto" option for root device
	[  +5.128560] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.340846] systemd-fstab-generator[6042]: Ignoring "noauto" option for root device
	[  +0.077544] systemd-fstab-generator[6054]: Ignoring "noauto" option for root device
	[  +0.078991] systemd-fstab-generator[6066]: Ignoring "noauto" option for root device
	[  +0.081975] systemd-fstab-generator[6081]: Ignoring "noauto" option for root device
	[  +0.199934] systemd-fstab-generator[6246]: Ignoring "noauto" option for root device
	[  +1.185288] systemd-fstab-generator[6371]: Ignoring "noauto" option for root device
	[  +1.247603] kauditd_printk_skb: 189 callbacks suppressed
	[Jul19 03:35] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.220122] systemd-fstab-generator[7351]: Ignoring "noauto" option for root device
	[  +4.800832] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.500333] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.037910] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.387004] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.111426] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.278593] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.865294] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.638615] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [94d080a49ef9] <==
	{"level":"info","ts":"2024-07-19T03:34:03.351154Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T03:34:05.226413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T03:34:05.226557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T03:34:05.2266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-19T03:34:05.226629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:05.226645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:05.226668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:05.226691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:05.231385Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-020000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T03:34:05.231404Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:34:05.231445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:34:05.231901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T03:34:05.231926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T03:34:05.23577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T03:34:05.236517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-19T03:34:30.329799Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T03:34:30.329845Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-020000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-19T03:34:30.329885Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T03:34:30.329929Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T03:34:30.345474Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T03:34:30.345497Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T03:34:30.345545Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-19T03:34:30.347322Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T03:34:30.347353Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T03:34:30.347359Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-020000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [a65977d710c4] <==
	{"level":"info","ts":"2024-07-19T03:34:45.714464Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T03:34:45.714499Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-19T03:34:45.714547Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T03:34:45.714579Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T03:34:45.714582Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T03:34:45.714692Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T03:34:45.714701Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T03:34:45.715495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-19T03:34:45.715547Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-19T03:34:45.715579Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:34:45.715623Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T03:34:47.604711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:47.604845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:47.604889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-19T03:34:47.604925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-19T03:34:47.604941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-19T03:34:47.604966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-19T03:34:47.604986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-19T03:34:47.609336Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-020000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T03:34:47.609363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:34:47.609407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T03:34:47.60974Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T03:34:47.610557Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T03:34:47.614293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-19T03:34:47.615536Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:36:02 up 2 min,  0 users,  load average: 0.54, 0.40, 0.17
	Linux functional-020000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1fd36777d184] <==
	I0719 03:34:48.218506       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0719 03:34:48.219867       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 03:34:48.220462       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 03:34:48.231966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 03:34:48.231977       1 aggregator.go:165] initial CRD sync complete...
	I0719 03:34:48.231980       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 03:34:48.231984       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 03:34:48.231986       1 cache.go:39] Caches are synced for autoregister controller
	I0719 03:34:48.245666       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 03:34:49.119867       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 03:34:49.530211       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 03:34:49.533747       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 03:34:49.547646       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 03:34:49.554550       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 03:34:49.556488       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 03:35:00.586432       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 03:35:00.641933       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:35:07.606858       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.23.0"}
	I0719 03:35:13.064692       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 03:35:13.107948       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.43.213"}
	I0719 03:35:17.136717       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.249.250"}
	I0719 03:35:26.531616       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.249.234"}
	I0719 03:35:58.325214       1 controller.go:615] quota admission added evaluator for: namespaces
	I0719 03:35:58.425059       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.225.64"}
	I0719 03:35:58.432996       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.22.156"}
	
	
	==> kube-controller-manager [a9610d2195e5] <==
	I0719 03:35:45.473980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="189.124µs"
	I0719 03:35:58.351362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.053698ms"
	E0719 03:35:58.351429       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 03:35:58.356542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.092429ms"
	E0719 03:35:58.356566       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 03:35:58.357176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="9.45782ms"
	E0719 03:35:58.357191       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 03:35:58.358846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.268445ms"
	E0719 03:35:58.358863       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 03:35:58.361456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="2.727692ms"
	E0719 03:35:58.361470       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 03:35:58.366027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="1.961739ms"
	E0719 03:35:58.366043       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 03:35:58.374566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="7.877872ms"
	I0719 03:35:58.380518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.926424ms"
	I0719 03:35:58.380562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="25.208µs"
	I0719 03:35:58.387714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="22.458µs"
	I0719 03:35:58.406230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="18.484102ms"
	I0719 03:35:58.419397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="13.141758ms"
	I0719 03:35:58.419824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.917µs"
	I0719 03:35:58.422472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="19.5µs"
	I0719 03:35:58.476942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="28.166µs"
	I0719 03:35:59.011854       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="30.958µs"
	I0719 03:36:01.037960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.68624ms"
	I0719 03:36:01.038073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.292µs"
	
	
	==> kube-controller-manager [cd1781c4f8a9] <==
	I0719 03:34:18.987502       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 03:34:18.992301       1 shared_informer.go:320] Caches are synced for namespace
	I0719 03:34:18.993485       1 shared_informer.go:320] Caches are synced for service account
	I0719 03:34:18.995369       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 03:34:19.011693       1 shared_informer.go:320] Caches are synced for job
	I0719 03:34:19.015945       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0719 03:34:19.018325       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0719 03:34:19.022173       1 shared_informer.go:320] Caches are synced for deployment
	I0719 03:34:19.025623       1 shared_informer.go:320] Caches are synced for persistent volume
	I0719 03:34:19.026904       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 03:34:19.036740       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0719 03:34:19.036791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.667µs"
	I0719 03:34:19.036747       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 03:34:19.036839       1 shared_informer.go:320] Caches are synced for HPA
	I0719 03:34:19.037009       1 shared_informer.go:320] Caches are synced for daemon sets
	I0719 03:34:19.041416       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 03:34:19.110697       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 03:34:19.110697       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0719 03:34:19.125846       1 shared_informer.go:320] Caches are synced for disruption
	I0719 03:34:19.126876       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 03:34:19.144214       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 03:34:19.187522       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 03:34:19.553593       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:34:19.637149       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:34:19.637171       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f952a9983cd1] <==
	I0719 03:34:07.072450       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:34:07.078242       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0719 03:34:07.093073       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:34:07.093094       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:34:07.093104       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:34:07.094057       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:34:07.094123       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:34:07.094137       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:34:07.094682       1 config.go:192] "Starting service config controller"
	I0719 03:34:07.094685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:34:07.094698       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:34:07.094700       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:34:07.094829       1 config.go:319] "Starting node config controller"
	I0719 03:34:07.094831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:34:07.195143       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:34:07.195214       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:34:07.195225       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fb1484792fc6] <==
	I0719 03:34:48.965702       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:34:48.973095       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0719 03:34:48.983268       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:34:48.983288       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:34:48.983297       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:34:48.983929       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:34:48.984015       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:34:48.984025       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:34:48.984427       1 config.go:192] "Starting service config controller"
	I0719 03:34:48.984462       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:34:48.984476       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:34:48.984506       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:34:48.984759       1 config.go:319] "Starting node config controller"
	I0719 03:34:48.984781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:34:49.084927       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:34:49.084969       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:34:49.084972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3c52ca096c5c] <==
	I0719 03:34:03.683362       1 serving.go:380] Generated self-signed cert in-memory
	W0719 03:34:05.781852       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 03:34:05.781868       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 03:34:05.781872       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 03:34:05.781875       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 03:34:05.802070       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 03:34:05.802152       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:34:05.802860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 03:34:05.802929       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 03:34:05.802964       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 03:34:05.802985       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 03:34:05.903229       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 03:34:30.354950       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0719 03:34:30.354976       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0719 03:34:30.355045       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0719 03:34:30.355079       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a954c440ac1f] <==
	I0719 03:34:45.996544       1 serving.go:380] Generated self-signed cert in-memory
	W0719 03:34:48.142627       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 03:34:48.142665       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 03:34:48.142675       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 03:34:48.142695       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 03:34:48.158695       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 03:34:48.158742       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:34:48.159451       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 03:34:48.159512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 03:34:48.159527       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 03:34:48.159539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 03:34:48.259719       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 03:35:45 functional-020000 kubelet[6378]: I0719 03:35:45.473289    6378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.770051747 podStartE2EDuration="3.473279408s" podCreationTimestamp="2024-07-19 03:35:42 +0000 UTC" firstStartedPulling="2024-07-19 03:35:43.354179278 +0000 UTC m=+58.964188605" lastFinishedPulling="2024-07-19 03:35:44.057406939 +0000 UTC m=+59.667416266" observedRunningTime="2024-07-19 03:35:44.911883239 +0000 UTC m=+60.521892566" watchObservedRunningTime="2024-07-19 03:35:45.473279408 +0000 UTC m=+61.083288735"
	Jul 19 03:35:51 functional-020000 kubelet[6378]: I0719 03:35:51.443115    6378 topology_manager.go:215] "Topology Admit Handler" podUID="766889b0-bff8-47cd-90be-8b9ead0c2674" podNamespace="default" podName="busybox-mount"
	Jul 19 03:35:51 functional-020000 kubelet[6378]: I0719 03:35:51.551476    6378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/766889b0-bff8-47cd-90be-8b9ead0c2674-test-volume\") pod \"busybox-mount\" (UID: \"766889b0-bff8-47cd-90be-8b9ead0c2674\") " pod="default/busybox-mount"
	Jul 19 03:35:51 functional-020000 kubelet[6378]: I0719 03:35:51.551497    6378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpd5\" (UniqueName: \"kubernetes.io/projected/766889b0-bff8-47cd-90be-8b9ead0c2674-kube-api-access-fzpd5\") pod \"busybox-mount\" (UID: \"766889b0-bff8-47cd-90be-8b9ead0c2674\") " pod="default/busybox-mount"
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.175062    6378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzpd5\" (UniqueName: \"kubernetes.io/projected/766889b0-bff8-47cd-90be-8b9ead0c2674-kube-api-access-fzpd5\") pod \"766889b0-bff8-47cd-90be-8b9ead0c2674\" (UID: \"766889b0-bff8-47cd-90be-8b9ead0c2674\") "
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.175084    6378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/766889b0-bff8-47cd-90be-8b9ead0c2674-test-volume\") pod \"766889b0-bff8-47cd-90be-8b9ead0c2674\" (UID: \"766889b0-bff8-47cd-90be-8b9ead0c2674\") "
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.175126    6378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/766889b0-bff8-47cd-90be-8b9ead0c2674-test-volume" (OuterVolumeSpecName: "test-volume") pod "766889b0-bff8-47cd-90be-8b9ead0c2674" (UID: "766889b0-bff8-47cd-90be-8b9ead0c2674"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.177822    6378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766889b0-bff8-47cd-90be-8b9ead0c2674-kube-api-access-fzpd5" (OuterVolumeSpecName: "kube-api-access-fzpd5") pod "766889b0-bff8-47cd-90be-8b9ead0c2674" (UID: "766889b0-bff8-47cd-90be-8b9ead0c2674"). InnerVolumeSpecName "kube-api-access-fzpd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.276160    6378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fzpd5\" (UniqueName: \"kubernetes.io/projected/766889b0-bff8-47cd-90be-8b9ead0c2674-kube-api-access-fzpd5\") on node \"functional-020000\" DevicePath \"\""
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.276170    6378 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/766889b0-bff8-47cd-90be-8b9ead0c2674-test-volume\") on node \"functional-020000\" DevicePath \"\""
	Jul 19 03:35:54 functional-020000 kubelet[6378]: I0719 03:35:54.975158    6378 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a72602ec544eb994ecca48df7a74fb80e1c39e898d3b411ffd2310c853b14e5"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.378125    6378 topology_manager.go:215] "Topology Admit Handler" podUID="5bb31225-a75b-40c3-8f91-57478a83194b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-znvdd"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: E0719 03:35:58.378162    6378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="766889b0-bff8-47cd-90be-8b9ead0c2674" containerName="mount-munger"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.378179    6378 memory_manager.go:354] "RemoveStaleState removing state" podUID="766889b0-bff8-47cd-90be-8b9ead0c2674" containerName="mount-munger"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.404621    6378 topology_manager.go:215] "Topology Admit Handler" podUID="1b616bfb-e559-46fe-84ce-f2e3dc49713c" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-tsx9h"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.469339    6378 scope.go:117] "RemoveContainer" containerID="0bd2e7e7ebde7a98cbc9e93e7d89a7b25ba12fe306f1be873a73e0855c9d426c"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: E0719 03:35:58.469442    6378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-tjdxh_default(2423f3b1-c36d-45dd-b921-48358d6e5057)\"" pod="default/hello-node-connect-6f49f58cd5-tjdxh" podUID="2423f3b1-c36d-45dd-b921-48358d6e5057"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.469807    6378 scope.go:117] "RemoveContainer" containerID="fcfdb9ebac4d3bdc74dd2d44a956e4b6d23aaebfbdec33ac893a827191d3dc95"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.503730    6378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b616bfb-e559-46fe-84ce-f2e3dc49713c-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-tsx9h\" (UID: \"1b616bfb-e559-46fe-84ce-f2e3dc49713c\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-tsx9h"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.503752    6378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spp5n\" (UniqueName: \"kubernetes.io/projected/5bb31225-a75b-40c3-8f91-57478a83194b-kube-api-access-spp5n\") pod \"dashboard-metrics-scraper-b5fc48f67-znvdd\" (UID: \"5bb31225-a75b-40c3-8f91-57478a83194b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-znvdd"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.503764    6378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5bb31225-a75b-40c3-8f91-57478a83194b-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-znvdd\" (UID: \"5bb31225-a75b-40c3-8f91-57478a83194b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-znvdd"
	Jul 19 03:35:58 functional-020000 kubelet[6378]: I0719 03:35:58.503774    6378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzqf\" (UniqueName: \"kubernetes.io/projected/1b616bfb-e559-46fe-84ce-f2e3dc49713c-kube-api-access-qgzqf\") pod \"kubernetes-dashboard-779776cb65-tsx9h\" (UID: \"1b616bfb-e559-46fe-84ce-f2e3dc49713c\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-tsx9h"
	Jul 19 03:35:59 functional-020000 kubelet[6378]: I0719 03:35:59.006681    6378 scope.go:117] "RemoveContainer" containerID="fcfdb9ebac4d3bdc74dd2d44a956e4b6d23aaebfbdec33ac893a827191d3dc95"
	Jul 19 03:35:59 functional-020000 kubelet[6378]: I0719 03:35:59.006809    6378 scope.go:117] "RemoveContainer" containerID="6b854c6005a58e905f82a38381bf39f3426896b248c312a69d0d6ec9df2b0814"
	Jul 19 03:35:59 functional-020000 kubelet[6378]: E0719 03:35:59.006892    6378 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-6gxzl_default(a811d80a-9191-4d98-b84e-cc77eddaa160)\"" pod="default/hello-node-65f5d5cc78-6gxzl" podUID="a811d80a-9191-4d98-b84e-cc77eddaa160"
	
	
	==> storage-provisioner [14cd9af455a2] <==
	I0719 03:34:48.943084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 03:34:48.949754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 03:34:48.949769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 03:35:06.334542       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 03:35:06.334680       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85dd3422-ec1d-48f0-96fa-7cecdcb86784", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-020000_36cdef91-bb5b-401a-b989-f991607637ca became leader
	I0719 03:35:06.334695       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-020000_36cdef91-bb5b-401a-b989-f991607637ca!
	I0719 03:35:06.435227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-020000_36cdef91-bb5b-401a-b989-f991607637ca!
	I0719 03:35:29.667213       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0719 03:35:29.667531       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"542656a7-49e2-4fa8-aee6-b0155355f85c", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0719 03:35:29.667266       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2f8a0d92-2e60-465e-b417-3bb49ba105b8 378 0 2024-07-19 03:33:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-19 03:33:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-542656a7-49e2-4fa8-aee6-b0155355f85c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  542656a7-49e2-4fa8-aee6-b0155355f85c 712 0 2024-07-19 03:35:29 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-19 03:35:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-19 03:35:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0719 03:35:29.667933       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-542656a7-49e2-4fa8-aee6-b0155355f85c" provisioned
	I0719 03:35:29.667950       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0719 03:35:29.667958       1 volume_store.go:212] Trying to save persistentvolume "pvc-542656a7-49e2-4fa8-aee6-b0155355f85c"
	I0719 03:35:29.675151       1 volume_store.go:219] persistentvolume "pvc-542656a7-49e2-4fa8-aee6-b0155355f85c" saved
	I0719 03:35:29.675836       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"542656a7-49e2-4fa8-aee6-b0155355f85c", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-542656a7-49e2-4fa8-aee6-b0155355f85c
	
	
	==> storage-provisioner [c16d31cfd09d] <==
	I0719 03:34:19.686673       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 03:34:19.690692       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 03:34:19.690714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-020000 -n functional-020000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-020000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-779776cb65-tsx9h
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-020000 describe pod busybox-mount kubernetes-dashboard-779776cb65-tsx9h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-020000 describe pod busybox-mount kubernetes-dashboard-779776cb65-tsx9h: exit status 1 (40.12ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-020000/192.168.105.4
	Start Time:       Thu, 18 Jul 2024 20:35:51 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a415f9da6a9cb5e4f030831733e04b149b3e9a18c7ad0a6f2ad69dcf18d33512
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 18 Jul 2024 20:35:52 -0700
	      Finished:     Thu, 18 Jul 2024 20:35:52 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fzpd5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fzpd5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-020000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.044s (1.044s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-tsx9h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-020000 describe pod busybox-mount kubernetes-dashboard-779776cb65-tsx9h: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.79s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (227.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-256000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0718 20:36:43.544587    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:38:59.675260    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:39:27.381695    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-256000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 90 (3m46.1369635s)

                                                
                                                
-- stdout --
	* [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.168.105.5
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	  - env NO_PROXY=192.168.105.5
	* Verifying Kubernetes components...
	
	* Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.168.105.5,192.168.105.6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:07.154652    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154655    4727 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:07.154657    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154787    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:36:07.155777    4727 out.go:298] Setting JSON to false
	I0718 20:36:07.172062    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2135,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:36:07.172136    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:07.175769    4727 out.go:177] * [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:36:07.182867    4727 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:07.182897    4727 notify.go:220] Checking for updates...
	I0718 20:36:07.188814    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:07.191895    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:36:07.192950    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:07.195871    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:36:07.198897    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:07.202011    4727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:07.205826    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 20:36:07.212869    4727 start.go:297] selected driver: qemu2
	I0718 20:36:07.212875    4727 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:36:07.212880    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:07.215027    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:36:07.217921    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0718 20:36:07.220933    4727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:36:07.220960    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:07.220968    4727 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 20:36:07.220971    4727 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 20:36:07.220995    4727 start.go:340] cluster config:
	{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:07.224405    4727 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:36:07.231878    4727 out.go:177] * Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	I0718 20:36:07.235849    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:07.235880    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:36:07.235892    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:07.235960    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:07.235965    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:07.236167    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:07.236181    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json: {Name:mk4f96c33b167a65b92bd4e48e5f1a3c7a52bbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:07.236387    4727 start.go:360] acquireMachinesLock for ha-256000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:07.236422    4727 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-256000"
	I0718 20:36:07.236432    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:07.236461    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 20:36:07.243901    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:07.268930    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:07.268958    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:07.269026    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:07.269056    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269065    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269104    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:07.269127    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269136    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269466    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:07.395393    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:07.434010    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:07.434014    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:07.434195    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.445169    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.445186    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.445241    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2 +20000M
	I0718 20:36:07.453205    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:07.453220    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.453236    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.453239    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:07.453248    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:07.453278    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e3:ed:16:92:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.491921    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.491947    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.491951    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:07.491963    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:07.492029    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:07.492048    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:07.492054    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:07.492061    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:07.492067    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:09.494175    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:09.494254    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:09.494618    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:09.494729    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:09.494764    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:09.494789    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:09.494817    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:11.496994    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:11.497242    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:11.497663    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:11.497717    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:11.497756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:11.497787    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:11.497819    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:13.500006    4727 main.go:141] libmachine: Attempt 3
	I0718 20:36:13.500080    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:13.500185    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:13.500200    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:13.500205    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:13.500210    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:13.500216    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:15.502208    4727 main.go:141] libmachine: Attempt 4
	I0718 20:36:15.502220    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:15.502255    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:15.502275    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:15.502280    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:15.502285    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:15.502290    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:17.504286    4727 main.go:141] libmachine: Attempt 5
	I0718 20:36:17.504293    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:17.504346    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:17.504356    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:17.504360    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:17.504364    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:17.504369    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:19.506369    4727 main.go:141] libmachine: Attempt 6
	I0718 20:36:19.506395    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:19.506467    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:19.506476    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:19.506481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:19.506485    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:19.506490    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:21.508527    4727 main.go:141] libmachine: Attempt 7
	I0718 20:36:21.508554    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:21.508694    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:21.508708    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:21.508719    4727 main.go:141] libmachine: Found match: 6a:e3:ed:16:92:d5
	I0718 20:36:21.508730    4727 main.go:141] libmachine: IP: 192.168.105.5
	I0718 20:36:21.508735    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0718 20:36:22.527247    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:36:22.527480    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.527975    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.527990    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:36:22.610697    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:36:22.610726    4727 buildroot.go:166] provisioning hostname "ha-256000"
	I0718 20:36:22.610824    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.611097    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.611107    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000 && echo "ha-256000" | sudo tee /etc/hostname
	I0718 20:36:22.682492    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000
	
	I0718 20:36:22.682552    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.682702    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.682713    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:36:22.742479    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:36:22.742492    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:36:22.742500    4727 buildroot.go:174] setting up certificates
	I0718 20:36:22.742504    4727 provision.go:84] configureAuth start
	I0718 20:36:22.742508    4727 provision.go:143] copyHostCerts
	I0718 20:36:22.742542    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742586    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:36:22.742593    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742831    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:36:22.743010    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743030    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:36:22.743033    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743097    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:36:22.743184    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743212    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:36:22.743215    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743275    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:36:22.743373    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000 san=[127.0.0.1 192.168.105.5 ha-256000 localhost minikube]
	I0718 20:36:22.831924    4727 provision.go:177] copyRemoteCerts
	I0718 20:36:22.831953    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:36:22.831960    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:22.861471    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:36:22.861517    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:36:22.869576    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:36:22.869616    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0718 20:36:22.877642    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:36:22.877682    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 20:36:22.885597    4727 provision.go:87] duration metric: took 143.091583ms to configureAuth
	I0718 20:36:22.885605    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:36:22.885700    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:22.885731    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.885814    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.885819    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:36:22.939257    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:36:22.939268    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:36:22.939327    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:36:22.939382    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.939495    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.939529    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:36:22.999120    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:36:22.999176    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.999299    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.999307    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:36:24.399001    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:36:24.399014    4727 machine.go:97] duration metric: took 1.871786709s to provisionDockerMachine
	I0718 20:36:24.399020    4727 client.go:171] duration metric: took 17.130530167s to LocalClient.Create
	I0718 20:36:24.399035    4727 start.go:167] duration metric: took 17.130580916s to libmachine.API.Create "ha-256000"
	I0718 20:36:24.399041    4727 start.go:293] postStartSetup for "ha-256000" (driver="qemu2")
	I0718 20:36:24.399047    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:36:24.399133    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:36:24.399144    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.429882    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:36:24.431446    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:36:24.431458    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:36:24.431559    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:36:24.431674    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:36:24.431679    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:36:24.431800    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:36:24.434949    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:24.443099    4727 start.go:296] duration metric: took 44.054208ms for postStartSetup
	I0718 20:36:24.443547    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:24.443727    4727 start.go:128] duration metric: took 17.207737166s to createHost
	I0718 20:36:24.443753    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:24.443841    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:24.443845    4727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 20:36:24.496185    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360184.183489336
	
	I0718 20:36:24.496191    4727 fix.go:216] guest clock: 1721360184.183489336
	I0718 20:36:24.496195    4727 fix.go:229] Guest: 2024-07-18 20:36:24.183489336 -0700 PDT Remote: 2024-07-18 20:36:24.44373 -0700 PDT m=+17.308254043 (delta=-260.240664ms)
	I0718 20:36:24.496206    4727 fix.go:200] guest clock delta is within tolerance: -260.240664ms
	I0718 20:36:24.496210    4727 start.go:83] releasing machines lock for "ha-256000", held for 17.260259709s
	I0718 20:36:24.496487    4727 ssh_runner.go:195] Run: cat /version.json
	I0718 20:36:24.496496    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.498161    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:36:24.498180    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.526501    4727 ssh_runner.go:195] Run: systemctl --version
	I0718 20:36:24.575612    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 20:36:24.577665    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:36:24.577696    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:36:24.584047    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:36:24.584056    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.584135    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.590860    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:36:24.594365    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:36:24.597804    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.597834    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:36:24.601501    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.605402    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:36:24.609279    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.613150    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:36:24.616783    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:36:24.620826    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:36:24.624868    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:36:24.628746    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:36:24.632406    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:36:24.635998    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:24.719937    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:36:24.727107    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.727172    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:36:24.734556    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.745145    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:36:24.752682    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.758405    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.763722    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:36:24.804424    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.810784    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.817505    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:36:24.818968    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:36:24.822004    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:36:24.827814    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:36:24.912234    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:36:24.993893    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.993951    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:36:25.000295    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:25.079893    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:27.267877    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188026583s)
	I0718 20:36:27.267954    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:36:27.273388    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:36:27.280952    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.286424    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:36:27.376871    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:36:27.462186    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.546490    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:36:27.553023    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.558470    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.643444    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:36:27.668876    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:36:27.669018    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:36:27.671231    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:36:27.671271    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:36:27.672746    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:36:27.689183    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:36:27.689243    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.699313    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.710299    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:36:27.710436    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:36:27.711936    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:27.716497    4727 kubeadm.go:883] updating cluster {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 20:36:27.716547    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:27.716590    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:27.721193    4727 docker.go:685] Got preloaded images: 
	I0718 20:36:27.721201    4727 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0718 20:36:27.721249    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:27.725068    4727 ssh_runner.go:195] Run: which lz4
	I0718 20:36:27.726303    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0718 20:36:27.726385    4727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0718 20:36:27.727841    4727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 20:36:27.727857    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335411903 bytes)
	I0718 20:36:29.032881    4727 docker.go:649] duration metric: took 1.306555792s to copy over tarball
	I0718 20:36:29.032945    4727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 20:36:30.077797    4727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.044866416s)
	I0718 20:36:30.077812    4727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 20:36:30.092929    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:30.096929    4727 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0718 20:36:30.102897    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:30.190133    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:32.408215    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218126791s)
	I0718 20:36:32.408325    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:32.414564    4727 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 20:36:32.414576    4727 cache_images.go:84] Images are preloaded, skipping loading
	I0718 20:36:32.414588    4727 kubeadm.go:934] updating node { 192.168.105.5 8443 v1.30.3 docker true true} ...
	I0718 20:36:32.414662    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:36:32.414717    4727 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 20:36:32.422967    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:32.422975    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:32.422989    4727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 20:36:32.423001    4727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256000 NodeName:ha-256000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 20:36:32.423064    4727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-256000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 20:36:32.423074    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:36:32.423127    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:36:32.430238    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:36:32.430293    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0718 20:36:32.430329    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:36:32.433734    4727 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 20:36:32.433764    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0718 20:36:32.437628    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0718 20:36:32.443760    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:36:32.449483    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0718 20:36:32.455815    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0718 20:36:32.461759    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:36:32.463168    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:32.467182    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:32.556522    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:36:32.567007    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.5
	I0718 20:36:32.567019    4727 certs.go:194] generating shared ca certs ...
	I0718 20:36:32.567029    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.567195    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:36:32.567242    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:36:32.567249    4727 certs.go:256] generating profile certs ...
	I0718 20:36:32.567287    4727 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:36:32.567299    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt with IP's: []
	I0718 20:36:32.629331    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt ...
	I0718 20:36:32.629341    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt: {Name:mkc9c3e562115edef8b85e012e81a3eb4a2cf75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key ...
	I0718 20:36:32.629649    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key: {Name:mkb41caa35d055a2dcb04d364862addacfff33bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629781    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4
	I0718 20:36:32.629789    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.254]
	I0718 20:36:32.695617    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 ...
	I0718 20:36:32.695626    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4: {Name:mkee89910ca1db08ac083863b0e4a027ae270203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696056    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 ...
	I0718 20:36:32.696061    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4: {Name:mk8365902b4e9f071c9404629a4b35cc6ca6ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696198    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:36:32.696306    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:36:32.696557    4727 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:36:32.696565    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt with IP's: []
	I0718 20:36:32.762976    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt ...
	I0718 20:36:32.762980    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt: {Name:mkb3e0281e7ef362624ad24bb17cfb244b9bc171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763112    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key ...
	I0718 20:36:32.763115    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key: {Name:mkc06a04ddb3616913d2c6f5647bad25fef6f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763224    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:36:32.763237    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:36:32.763247    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:36:32.763257    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:36:32.763268    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:36:32.763279    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:36:32.763290    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:36:32.763301    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:36:32.763382    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:36:32.763410    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:36:32.763415    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:36:32.763434    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:36:32.763451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:36:32.763468    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:36:32.763505    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:32.763524    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.763535    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.763546    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.763807    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:36:32.773281    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:36:32.781447    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:36:32.789770    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:36:32.798040    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 20:36:32.806232    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:36:32.814458    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:36:32.822522    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:36:32.830515    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:36:32.838566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:36:32.846581    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:36:32.854568    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 20:36:32.860769    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:36:32.863035    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:36:32.867352    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868859    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868879    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.870984    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:36:32.874504    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:36:32.878096    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879659    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879678    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.881640    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:36:32.885559    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:36:32.889461    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891114    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891133    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.893171    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:36:32.897112    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:36:32.898621    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:36:32.898660    4727 kubeadm.go:392] StartCluster: {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:32.898726    4727 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 20:36:32.903849    4727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 20:36:32.907545    4727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 20:36:32.910740    4727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 20:36:32.914021    4727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 20:36:32.914030    4727 kubeadm.go:157] found existing configuration files:
	
	I0718 20:36:32.914050    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 20:36:32.917254    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 20:36:32.917277    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 20:36:32.920874    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 20:36:32.924549    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 20:36:32.924574    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 20:36:32.928189    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.931542    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 20:36:32.931572    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.934804    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 20:36:32.937825    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 20:36:32.937847    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 20:36:32.941208    4727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 20:36:32.964473    4727 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0718 20:36:32.964502    4727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 20:36:33.010272    4727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 20:36:33.010346    4727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 20:36:33.010394    4727 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 20:36:33.080896    4727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 20:36:33.088116    4727 out.go:204]   - Generating certificates and keys ...
	I0718 20:36:33.088149    4727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 20:36:33.088180    4727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 20:36:33.187618    4727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 20:36:33.225765    4727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0718 20:36:33.439485    4727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0718 20:36:33.599214    4727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0718 20:36:33.681357    4727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0718 20:36:33.681418    4727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.726840    4727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0718 20:36:33.726901    4727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.875169    4727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 20:36:34.071575    4727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 20:36:34.163748    4727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0718 20:36:34.163778    4727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 20:36:34.260583    4727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 20:36:34.352375    4727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 20:36:34.395125    4727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 20:36:34.512349    4727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 20:36:34.655223    4727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 20:36:34.655381    4727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 20:36:34.656483    4727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 20:36:34.666848    4727 out.go:204]   - Booting up control plane ...
	I0718 20:36:34.666901    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 20:36:34.666950    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 20:36:34.666982    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 20:36:34.667031    4727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 20:36:34.667081    4727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 20:36:34.667103    4727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 20:36:34.759306    4727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0718 20:36:34.759350    4727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0718 20:36:35.263383    4727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.7975ms
	I0718 20:36:35.263624    4727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0718 20:36:38.766721    4727 kubeadm.go:310] [api-check] The API server is healthy after 3.504642043s
	I0718 20:36:38.772139    4727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 20:36:38.775784    4727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 20:36:38.782114    4727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 20:36:38.782191    4727 kubeadm.go:310] [mark-control-plane] Marking the node ha-256000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 20:36:38.784595    4727 kubeadm.go:310] [bootstrap-token] Using token: yv8fsh.sh51yi31jewcw15j
	I0718 20:36:38.788784    4727 out.go:204]   - Configuring RBAC rules ...
	I0718 20:36:38.788835    4727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 20:36:38.790051    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 20:36:38.796261    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 20:36:38.797188    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 20:36:38.797986    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 20:36:38.798957    4727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 20:36:39.169725    4727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 20:36:39.576005    4727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 20:36:40.169284    4727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 20:36:40.169608    4727 kubeadm.go:310] 
	I0718 20:36:40.169641    4727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 20:36:40.169646    4727 kubeadm.go:310] 
	I0718 20:36:40.169692    4727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 20:36:40.169695    4727 kubeadm.go:310] 
	I0718 20:36:40.169709    4727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 20:36:40.169760    4727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 20:36:40.169794    4727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 20:36:40.169797    4727 kubeadm.go:310] 
	I0718 20:36:40.169826    4727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 20:36:40.169830    4727 kubeadm.go:310] 
	I0718 20:36:40.169856    4727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 20:36:40.169858    4727 kubeadm.go:310] 
	I0718 20:36:40.169883    4727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 20:36:40.169938    4727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 20:36:40.169984    4727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 20:36:40.169987    4727 kubeadm.go:310] 
	I0718 20:36:40.170044    4727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 20:36:40.170090    4727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 20:36:40.170093    4727 kubeadm.go:310] 
	I0718 20:36:40.170134    4727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170222    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 20:36:40.170234    4727 kubeadm.go:310] 	--control-plane 
	I0718 20:36:40.170242    4727 kubeadm.go:310] 
	I0718 20:36:40.170285    4727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 20:36:40.170299    4727 kubeadm.go:310] 
	I0718 20:36:40.170351    4727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170426    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 20:36:40.170492    4727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 20:36:40.170502    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:40.170507    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:40.176555    4727 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 20:36:40.183616    4727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 20:36:40.185686    4727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 20:36:40.185696    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 20:36:40.191764    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 20:36:40.332259    4727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 20:36:40.332307    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.332337    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000 minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=true
	I0718 20:36:40.385331    4727 ops.go:34] apiserver oom_adj: -16
	I0718 20:36:40.385383    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.887435    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.387480    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.887395    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.387370    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.885756    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.387374    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.886101    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.386656    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.887355    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.387330    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.887331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.386668    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.886398    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.385335    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.887237    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.387224    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.887271    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.387175    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.885647    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.387168    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.887214    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.387158    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.887129    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.387127    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.887088    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.387119    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.885301    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.387061    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.453749    4727 kubeadm.go:1113] duration metric: took 14.12187225s to wait for elevateKubeSystemPrivileges
	I0718 20:36:54.453766    4727 kubeadm.go:394] duration metric: took 21.55570275s to StartCluster
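	(The run of repeated "kubectl get sa default" invocations above is minikube polling, roughly every 500ms, for the "default" ServiceAccount before elevating kube-system privileges; the wait closes with the 14.12s duration metric. A minimal illustrative sketch of such a retry loop follows; the helper name is hypothetical and this is not minikube's actual code.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount retries "kubectl get sa default" until it
	// succeeds or the timeout expires, mirroring the polling seen in the log.
	func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil // the default ServiceAccount exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for the default service account")
	}

	func main() {
		err := waitForDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute,
		)
		fmt.Println(err)
	}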
	I0718 20:36:54.453776    4727 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.453868    4727 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.454239    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.454483    4727 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.454492    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:36:54.454494    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 20:36:54.454496    4727 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 20:36:54.454530    4727 addons.go:69] Setting storage-provisioner=true in profile "ha-256000"
	I0718 20:36:54.454533    4727 addons.go:69] Setting default-storageclass=true in profile "ha-256000"
	I0718 20:36:54.454543    4727 addons.go:234] Setting addon storage-provisioner=true in "ha-256000"
	I0718 20:36:54.454546    4727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-256000"
	I0718 20:36:54.454554    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.454722    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.455342    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.455486    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 20:36:54.455762    4727 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 20:36:54.455811    4727 addons.go:234] Setting addon default-storageclass=true in "ha-256000"
	I0718 20:36:54.455823    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.460675    4727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 20:36:54.464747    4727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.464758    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 20:36:54.464769    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.465436    4727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.465440    4727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 20:36:54.465444    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.511774    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.706626    4727 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0718 20:36:54.777305    4727 round_trippers.go:463] GET https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 20:36:54.777314    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.777318    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.777321    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.782732    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:36:54.783013    4727 round_trippers.go:463] PUT https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0718 20:36:54.783019    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.783023    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.783026    4727 round_trippers.go:473]     Content-Type: application/json
	I0718 20:36:54.783028    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.784014    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:36:54.792272    4727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 20:36:54.793579    4727 addons.go:510] duration metric: took 339.092083ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0718 20:36:54.793593    4727 start.go:246] waiting for cluster config update ...
	I0718 20:36:54.793600    4727 start.go:255] writing updated cluster config ...
	I0718 20:36:54.798143    4727 out.go:177] 
	I0718 20:36:54.802340    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.802369    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.805206    4727 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:36:54.813295    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:54.813304    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:54.813383    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:54.813389    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:54.813425    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.813828    4727 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:54.813863    4727 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:36:54.813872    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.813899    4727 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0718 20:36:54.818236    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:54.833731    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:54.833754    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:54.833854    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:54.833891    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833898    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.833936    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:54.833959    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833965    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.834273    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:54.991167    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:55.074302    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:55.074313    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:55.074505    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.084177    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.084198    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.084247    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2 +20000M
	I0718 20:36:55.092640    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:55.092655    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.092668    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.092672    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:55.092685    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:55.092723    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.131373    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.131397    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.131401    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:55.131414    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:55.131476    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:55.131491    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:55.131496    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:55.131509    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:55.131515    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:55.131521    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:57.132241    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:57.132260    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:57.132370    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:57.132380    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:57.132387    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:57.132391    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:57.132399    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:57.132403    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:59.134429    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:59.134514    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:59.134610    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:59.134633    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:59.134640    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:59.134645    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:59.134650    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:59.134655    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:01.136704    4727 main.go:141] libmachine: Attempt 3
	I0718 20:37:01.136730    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:01.136864    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:01.136874    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:01.136879    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:01.136892    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:01.136897    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:01.136902    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:03.139087    4727 main.go:141] libmachine: Attempt 4
	I0718 20:37:03.139131    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:03.139262    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:03.139278    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:03.139286    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:03.139290    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:03.139295    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:03.139305    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:05.141342    4727 main.go:141] libmachine: Attempt 5
	I0718 20:37:05.141371    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:05.141487    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:05.141499    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:05.141504    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:05.141508    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:05.141513    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:05.141518    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:07.141729    4727 main.go:141] libmachine: Attempt 6
	I0718 20:37:07.141760    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:07.141844    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:07.141853    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:07.141858    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:07.141862    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:07.141866    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:07.141871    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:09.143893    4727 main.go:141] libmachine: Attempt 7
	I0718 20:37:09.143910    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:09.143997    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:37:09.144009    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:37:09.144011    4727 main.go:141] libmachine: Found match: 5a:e8:7:38:73:30
	I0718 20:37:09.144020    4727 main.go:141] libmachine: IP: 192.168.105.6
	I0718 20:37:09.144023    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
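	(The "Attempt N" lines above show libmachine rescanning /var/db/dhcpd_leases every two seconds for a lease whose hardware address matches the new VM's MAC, 5a:e8:7:38:73:30, then taking the associated IP. The sketch below illustrates such a lookup under an assumed macOS lease-file layout with name/ip_address/hw_address fields per entry; it is not the actual libmachine implementation.)

	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// findLeaseIP scans the macOS DHCP lease file for an entry whose
	// hw_address contains mac and returns the ip_address seen for that entry.
	func findLeaseIP(mac string) (string, error) {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		var ip string
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// assumes ip_address precedes hw_address within each lease entry
			if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		for attempt := 0; attempt < 30; attempt++ {
			if ip, err := findLeaseIP("5a:e8:7:38:73:30"); err == nil {
				fmt.Println("IP:", ip)
				return
			}
			fmt.Println("Attempt", attempt)
			time.Sleep(2 * time.Second)
		}
	}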
	I0718 20:37:22.173394    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:37:22.173460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.173824    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.173832    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:37:22.224366    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:37:22.224379    4727 buildroot.go:166] provisioning hostname "ha-256000-m02"
	I0718 20:37:22.224437    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.224569    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.224574    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m02 && echo "ha-256000-m02" | sudo tee /etc/hostname
	I0718 20:37:22.281136    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m02
	
	I0718 20:37:22.281193    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.281326    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.281333    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:37:22.335405    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:37:22.335420    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:37:22.335427    4727 buildroot.go:174] setting up certificates
	I0718 20:37:22.335432    4727 provision.go:84] configureAuth start
	I0718 20:37:22.335436    4727 provision.go:143] copyHostCerts
	I0718 20:37:22.335460    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335499    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:37:22.335504    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335625    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:37:22.335755    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335793    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:37:22.335798    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335849    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:37:22.335937    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.335958    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:37:22.335961    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.336009    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:37:22.336098    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m02 san=[127.0.0.1 192.168.105.6 ha-256000-m02 localhost minikube]
	I0718 20:37:22.416839    4727 provision.go:177] copyRemoteCerts
	I0718 20:37:22.417292    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:37:22.417307    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:22.446250    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:37:22.446323    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:37:22.455193    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:37:22.455243    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:37:22.463182    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:37:22.463217    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:37:22.471841    4727 provision.go:87] duration metric: took 136.406375ms to configureAuth
	I0718 20:37:22.471860    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:37:22.472154    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:22.472192    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.472306    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.472312    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:37:22.520570    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:37:22.520580    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:37:22.520661    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:37:22.520720    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.520835    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.520884    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:37:22.573905    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:37:22.573954    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.574074    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.574082    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:37:23.946918    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:37:23.946932    4727 machine.go:97] duration metric: took 1.773574458s to provisionDockerMachine
	I0718 20:37:23.946948    4727 client.go:171] duration metric: took 29.113993584s to LocalClient.Create
	I0718 20:37:23.946964    4727 start.go:167] duration metric: took 29.114041166s to libmachine.API.Create "ha-256000"
	I0718 20:37:23.946968    4727 start.go:293] postStartSetup for "ha-256000-m02" (driver="qemu2")
	I0718 20:37:23.946975    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:37:23.947049    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:37:23.947059    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:23.975789    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:37:23.977316    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:37:23.977325    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:37:23.977414    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:37:23.977531    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:37:23.977538    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:37:23.977667    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:37:23.981129    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:23.989836    4727 start.go:296] duration metric: took 42.86225ms for postStartSetup
	I0718 20:37:23.990279    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:37:23.990466    4727 start.go:128] duration metric: took 29.177367125s to createHost
	I0718 20:37:23.990492    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:23.990582    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:23.990587    4727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 20:37:24.039991    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360244.056265969
	
	I0718 20:37:24.040003    4727 fix.go:216] guest clock: 1721360244.056265969
	I0718 20:37:24.040011    4727 fix.go:229] Guest: 2024-07-18 20:37:24.056265969 -0700 PDT Remote: 2024-07-18 20:37:23.990469 -0700 PDT m=+76.856635126 (delta=65.796969ms)
	I0718 20:37:24.040021    4727 fix.go:200] guest clock delta is within tolerance: 65.796969ms
	I0718 20:37:24.040027    4727 start.go:83] releasing machines lock for "ha-256000-m02", held for 29.226966s
	I0718 20:37:24.045188    4727 out.go:177] * Found network options:
	I0718 20:37:24.048256    4727 out.go:177]   - NO_PROXY=192.168.105.5
	W0718 20:37:24.052331    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:37:24.052639    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:37:24.052695    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:37:24.052702    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:24.052696    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:37:24.052803    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	W0718 20:37:24.080701    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:37:24.080760    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:37:24.120864    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:37:24.120877    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.120944    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.128913    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:37:24.133095    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:37:24.137320    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.137368    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:37:24.141513    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.145685    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:37:24.149674    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.153524    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:37:24.157504    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:37:24.161442    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:37:24.165217    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:37:24.169715    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:37:24.173504    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:37:24.177428    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.249585    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:37:24.258814    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.258889    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:37:24.266134    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.272789    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:37:24.282701    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.287831    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.293394    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:37:24.332150    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.338444    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.344970    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:37:24.346508    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:37:24.349662    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:37:24.355683    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:37:24.439008    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:37:24.522884    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.522913    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:37:24.529269    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.614408    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:37:26.705797    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.091426708s)
	I0718 20:37:26.705868    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:37:26.711797    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:37:26.719055    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.724747    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:37:26.813533    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:37:26.893596    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:26.965581    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:37:26.972962    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.978785    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:27.061213    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:37:27.087585    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:37:27.087659    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:37:27.091046    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:37:27.091097    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:37:27.092542    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:37:27.112215    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:37:27.112278    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.124950    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.136592    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:37:27.145555    4727 out.go:177]   - env NO_PROXY=192.168.105.5
	I0718 20:37:27.149713    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:37:27.151201    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
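	The one-liner above updates /etc/hosts in place: it filters out any stale host.minikube.internal entry, appends the current mapping, and copies the temporary file back with sudo. The same command, expanded for readability:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo "192.168.105.1	host.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts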
	I0718 20:37:27.155414    4727 mustload.go:65] Loading cluster: ha-256000
	I0718 20:37:27.155551    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:27.156066    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:27.156157    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.6
	I0718 20:37:27.156161    4727 certs.go:194] generating shared ca certs ...
	I0718 20:37:27.156167    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.156269    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:37:27.156316    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:37:27.156321    4727 certs.go:256] generating profile certs ...
	I0718 20:37:27.156387    4727 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:37:27.156400    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9
	I0718 20:37:27.156410    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 192.168.105.254]
	I0718 20:37:27.328161    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 ...
	I0718 20:37:27.328188    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9: {Name:mkff536dfdabd0cc9a693525dd142a97006d4485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 ...
	I0718 20:37:27.328655    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9: {Name:mkb963d77aed955311589ae3cd9371dca3b50bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328816    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:37:27.328945    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:37:27.329100    4727 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:37:27.329110    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:37:27.329125    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:37:27.329137    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:37:27.329150    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:37:27.329162    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:37:27.329176    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:37:27.329186    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:37:27.329197    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:37:27.329271    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:37:27.329299    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:37:27.329305    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:37:27.329347    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:37:27.329372    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:37:27.329396    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:37:27.329451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:27.329478    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.329491    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.329501    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.329519    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:27.355925    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0718 20:37:27.357647    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0718 20:37:27.362088    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0718 20:37:27.363733    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0718 20:37:27.367759    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0718 20:37:27.369261    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0718 20:37:27.373839    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0718 20:37:27.375475    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0718 20:37:27.379174    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0718 20:37:27.380628    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0718 20:37:27.384809    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0718 20:37:27.386562    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0718 20:37:27.390606    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:37:27.399865    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:37:27.408308    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:37:27.416747    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:37:27.425050    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0718 20:37:27.433244    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:37:27.441306    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:37:27.449446    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:37:27.457566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:37:27.465676    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:37:27.473743    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:37:27.482174    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0718 20:37:27.487947    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0718 20:37:27.493902    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0718 20:37:27.499712    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0718 20:37:27.505265    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0718 20:37:27.511047    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0718 20:37:27.517340    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0718 20:37:27.523229    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:37:27.525438    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:37:27.529080    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530597    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530617    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.532775    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:37:27.536483    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:37:27.540031    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541631    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541649    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.543631    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:37:27.547571    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:37:27.551419    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553057    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553079    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.555162    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
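	Each CA dropped into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL resolves trust anchors. A sketch of that linking step for one certificate (the b5213941 hash matches the log above):

	    # Compute the subject hash OpenSSL uses to look up CA certificates
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	    # Create the <hash>.0 symlink if it is not already present
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0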
	I0718 20:37:27.559227    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:37:27.560725    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:37:27.560754    4727 kubeadm.go:934] updating node {m02 192.168.105.6 8443 v1.30.3 docker true true} ...
	I0718 20:37:27.560799    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:37:27.560814    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:37:27.560837    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:37:27.572539    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:37:27.572577    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
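	This manifest is later written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so the kubelet runs kube-vip as a static pod that holds the 192.168.105.254 control-plane VIP. A sketch of installing such a manifest by hand, assuming it has been saved locally as kube-vip.yaml:

	    # Static pod manifests in this directory are picked up by the kubelet directly,
	    # without going through the API server's scheduler
	    sudo mkdir -p /etc/kubernetes/manifests
	    sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml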
	I0718 20:37:27.572623    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.576082    4727 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0718 20:37:27.576121    4727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm
	I0718 20:37:27.579785    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet
	I0718 20:37:34.561853    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.561928    4727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.564073    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0718 20:37:34.564095    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (49938584 bytes)
	I0718 20:37:35.510887    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.510952    4727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.512864    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0718 20:37:35.512884    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (48955544 bytes)
	I0718 20:37:42.606961    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:37:42.613080    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.613168    4727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.614817    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0718 20:37:42.614833    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (96467384 bytes)
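	Each binary is downloaded from dl.k8s.io with its published .sha256 file used for verification, cached under .minikube/cache, and copied to the node only when the stat existence check fails. A rough equivalent of the download-and-verify step for one binary (URL as in the log, arm64 target assumed):

	    V=v1.30.3
	    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/arm64/kubelet"
	    curl -LO "https://dl.k8s.io/release/${V}/bin/linux/arm64/kubelet.sha256"
	    # Check the download against the published checksum before installing it
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -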
	I0718 20:37:43.119287    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0718 20:37:43.122637    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0718 20:37:43.128732    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:37:43.134516    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0718 20:37:43.141275    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:37:43.142606    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:43.146857    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:43.230113    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:37:43.243145    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:43.243333    4727 start.go:317] joinCluster: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:37:43.243382    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 20:37:43.243391    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:43.371073    4727 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:37:43.371092    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0718 20:38:03.232381    4727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (19.861822375s)
	I0718 20:38:03.232396    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 20:38:03.485331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000-m02 minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=false
	I0718 20:38:03.530961    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0718 20:38:03.578648    4727 start.go:319] duration metric: took 20.3358655s to joinCluster
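	Joining m02 as a second control-plane node follows the standard kubeadm flow visible above: mint a join token on the existing control plane, run kubeadm join with --control-plane on the new machine, start the kubelet, then relabel and untaint the node. Condensed from the logged commands (token and CA hash shown as placeholders):

	    # On the existing control-plane node: print a join command with a fresh token
	    sudo kubeadm token create --print-join-command --ttl=0

	    # On ha-256000-m02: join as an additional control-plane member
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane --apiserver-advertise-address=192.168.105.6 \
	      --apiserver-bind-port=8443 --cri-socket unix:///var/run/cri-dockerd.sock \
	      --node-name=ha-256000-m02
	    sudo systemctl enable kubelet && sudo systemctl start kubelet

	    # Allow workloads on the new node, mirroring the taint removal above
	    kubectl taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-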
	I0718 20:38:03.578688    4727 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:03.578898    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:03.583884    4727 out.go:177] * Verifying Kubernetes components...
	I0718 20:38:03.590972    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:03.702999    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:38:03.709797    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:38:03.709929    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0718 20:38:03.709957    4727 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.105.254:8443 with https://192.168.105.5:8443
	I0718 20:38:03.710058    4727 node_ready.go:35] waiting up to 6m0s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:03.710093    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:03.710097    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:03.710101    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:03.710109    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:03.716299    4727 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 20:38:04.212157    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.212175    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.212180    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.212182    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.217870    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:38:04.711681    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.711692    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.711696    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.711698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.713463    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.212138    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.212149    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.212153    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.212156    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.214175    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:05.711331    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.711345    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.711360    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.711363    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.712682    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.713155    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:06.210250    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.210264    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.210268    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.210271    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.212254    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:06.711235    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.711255    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.711260    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.711262    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.712940    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.212089    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.212100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.212104    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.212106    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.214317    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:07.712070    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.712079    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.712083    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.712086    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.713825    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.714102    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:08.211862    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.211878    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.211883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.211885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.213993    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:08.712062    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.712075    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.712079    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.712081    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.713753    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.212027    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.212036    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.212052    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.212055    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.213833    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.712020    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.712029    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.712033    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.712035    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.713439    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.212016    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.212025    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.212029    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.212031    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.213662    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.213924    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:10.711085    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.711100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.711114    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.711117    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.712848    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.211980    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.211995    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.211999    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.212002    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.213760    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.711981    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.711994    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.712005    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.712008    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.713435    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.211955    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.211969    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.211974    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.211976    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.213759    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.214202    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:12.711912    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.711929    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.711933    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.711935    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.713382    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.211920    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.211932    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.211941    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.211943    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.213828    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.711194    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.711206    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.711209    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.711211    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.712757    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:14.211901    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.211919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.211924    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.211932    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.213956    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:14.214285    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:14.711860    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.711876    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.711883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.711885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.713170    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.211895    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.211907    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.211911    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.211913    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.213693    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.711835    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.711849    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.711863    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.711865    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.713487    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.211839    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.211844    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.211846    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.213365    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.711659    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.711669    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.711673    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.711675    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.713252    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.713433    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:17.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.211830    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.211834    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.211836    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.213413    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:17.711756    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.711781    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.711785    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.711788    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.713341    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.211779    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.211794    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.211798    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.211800    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.213551    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.711749    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.711759    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.711764    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.711766    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.713325    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.713645    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:19.211738    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.211750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.211754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.211756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.213507    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:19.711717    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.711731    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.711734    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.711736    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.713476    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.211230    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.211271    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.211314    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.211318    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.212922    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.710773    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.710783    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.710787    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.710790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.712163    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.211705    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.211717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.211738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.211742    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.213362    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.213898    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:21.711683    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.711698    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.711702    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.711704    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.713411    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.211928    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.211938    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.211942    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.211944    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.214292    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.214473    4727 node_ready.go:49] node "ha-256000-m02" has status "Ready":"True"
	I0718 20:38:22.214479    4727 node_ready.go:38] duration metric: took 18.50492425s for node "ha-256000-m02" to be "Ready" ...
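	The loop above polls GET /api/v1/nodes/ha-256000-m02 roughly every 500ms until the Ready condition reports True, which here took about 18.5s. The equivalent check expressed with kubectl, for reference (a sketch, not what minikube itself runs):

	    # Block until the node reports Ready, with the same 6-minute ceiling used above
	    kubectl wait --for=condition=Ready node/ha-256000-m02 --timeout=6m
	    kubectl get node ha-256000-m02 -o wide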
	I0718 20:38:22.214483    4727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:22.214513    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:22.214523    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.214528    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.214533    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.216823    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.221656    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.221688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gl7wn
	I0718 20:38:22.221691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.221695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.221698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.223037    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.223438    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.223443    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.223447    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.223449    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.224627    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.224906    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.224912    4727 pod_ready.go:81] duration metric: took 3.247917ms for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224916    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224935    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t5fk7
	I0718 20:38:22.224937    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.224950    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.224954    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.226106    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.226400    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.226404    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.226411    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.226414    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.227526    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.227886    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.227891    4727 pod_ready.go:81] duration metric: took 2.972458ms for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227894    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227913    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000
	I0718 20:38:22.227919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.227923    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.227925    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.228991    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.229395    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.229399    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.229402    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.229406    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.230465    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.230693    4727 pod_ready.go:92] pod "etcd-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.230699    4727 pod_ready.go:81] duration metric: took 2.801916ms for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230703    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230720    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000-m02
	I0718 20:38:22.230723    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.230726    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.230728    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.231834    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.232263    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.232268    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.232271    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.232273    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.233360    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.233783    4727 pod_ready.go:92] pod "etcd-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.233789    4727 pod_ready.go:81] duration metric: took 3.083416ms for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.233794    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.413762    4727 request.go:629] Waited for 179.941666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413824    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413828    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.413841    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.413846    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.415462    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.613785    4727 request.go:629] Waited for 197.877917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613838    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613844    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.613847    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.613849    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.616581    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.616806    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.616814    4727 pod_ready.go:81] duration metric: took 383.02725ms for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
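	The GET lines above are client-go round-tripper debug output, and the "Waited for ... due to client-side throttling" messages come from the client's own rate limiter rather than server-side priority and fairness. A comparable request trace can be reproduced by hand; this is a minimal sketch assuming a kubeconfig that points at this cluster's apiserver (192.168.105.5:8443), not the code the test runs:

	# Raise kubectl verbosity so each GET and its response status is printed,
	# much like the round_trippers.go lines above (hedged example).
	kubectl --v=7 -n kube-system get pod kube-apiserver-ha-256000
	kubectl --v=7 get node ha-256000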
	I0718 20:38:22.616819    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.813743    4727 request.go:629] Waited for 196.894708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813781    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813784    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.813788    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.813790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.815511    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.012375    4727 request.go:629] Waited for 196.496584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012418    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012422    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.012426    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.012428    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.014100    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.014297    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.014304    4727 pod_ready.go:81] duration metric: took 397.4915ms for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.014308    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.213728    4727 request.go:629] Waited for 199.392916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213764    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213767    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.213771    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.213774    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.215292    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.412016    4727 request.go:629] Waited for 196.230667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412048    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412050    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.412055    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.412057    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.414117    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.414317    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.414324    4727 pod_ready.go:81] duration metric: took 400.022917ms for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.414329    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.613726    4727 request.go:629] Waited for 199.367083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613754    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613757    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.613760    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.613763    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.615829    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.813718    4727 request.go:629] Waited for 197.566667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813747    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.813754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.813756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.815391    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.815670    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.815679    4727 pod_ready.go:81] duration metric: took 401.357791ms for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.815685    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.013744    4727 request.go:629] Waited for 198.028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013777    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013780    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.013783    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.013785    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.015358    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.213717    4727 request.go:629] Waited for 197.87625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213750    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213772    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.213776    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.213779    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.215177    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.215486    4727 pod_ready.go:92] pod "kube-proxy-99sn4" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.215494    4727 pod_ready.go:81] duration metric: took 399.816291ms for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.215499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.412543    4727 request.go:629] Waited for 197.022333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412572    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412576    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.412580    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.412582    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.414200    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.613688    4727 request.go:629] Waited for 199.188292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613723    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613734    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.613738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.613740    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.616115    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:24.616487    4727 pod_ready.go:92] pod "kube-proxy-jxnv9" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.616495    4727 pod_ready.go:81] duration metric: took 401.003958ms for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.616499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.811999    4727 request.go:629] Waited for 195.4745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812037    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812040    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.812044    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.812046    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.813599    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.013712    4727 request.go:629] Waited for 199.880375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013743    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013746    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.013750    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.013752    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.015408    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.015677    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.015685    4727 pod_ready.go:81] duration metric: took 399.1935ms for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.015689    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.213690    4727 request.go:629] Waited for 197.964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213729    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213735    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.213739    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.213741    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.215582    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.413674    4727 request.go:629] Waited for 197.841584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413700    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413702    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.413714    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.413717    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.415433    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.415627    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.415633    4727 pod_ready.go:81] duration metric: took 399.951542ms for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.415638    4727 pod_ready.go:38] duration metric: took 3.201238458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
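	The block above is minikube polling each system pod for the Ready condition before it proceeds. The same gate can be approximated with kubectl wait; a minimal sketch, assuming kubectl access to this cluster, with selectors mirroring the labels listed in the log line above:

	# Wait for the same pod groups to report Ready (hedged; not the code the test runs).
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m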
	I0718 20:38:25.415647    4727 api_server.go:52] waiting for apiserver process to appear ...
	I0718 20:38:25.415719    4727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:38:25.421413    4727 api_server.go:72] duration metric: took 21.843316333s to wait for apiserver process to appear ...
	I0718 20:38:25.421422    4727 api_server.go:88] waiting for apiserver healthz status ...
	I0718 20:38:25.421429    4727 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0718 20:38:25.424174    4727 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0718 20:38:25.424198    4727 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0718 20:38:25.424200    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.424204    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.424207    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.424682    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:38:25.424723    4727 api_server.go:141] control plane version: v1.30.3
	I0718 20:38:25.424729    4727 api_server.go:131] duration metric: took 3.305084ms to wait for apiserver health ...
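	The healthz probe and the /version GET above can be reproduced manually. A minimal sketch with curl, using -k to skip TLS verification for brevity (the test itself trusts the cluster CA); both endpoints are typically readable without credentials under default RBAC:

	curl -k https://192.168.105.5:8443/healthz   # expected body: ok
	curl -k https://192.168.105.5:8443/version   # reports the control-plane version (v1.30.3 here)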
	I0718 20:38:25.424732    4727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 20:38:25.613673    4727 request.go:629] Waited for 188.916583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613714    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.613721    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.613723    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.616608    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:25.620463    4727 system_pods.go:59] 17 kube-system pods found
	I0718 20:38:25.620472    4727 system_pods.go:61] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:25.620475    4727 system_pods.go:61] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:25.620477    4727 system_pods.go:61] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:25.620479    4727 system_pods.go:61] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:25.620480    4727 system_pods.go:61] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:25.620482    4727 system_pods.go:61] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:25.620484    4727 system_pods.go:61] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:25.620486    4727 system_pods.go:61] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:25.620488    4727 system_pods.go:61] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:25.620490    4727 system_pods.go:61] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:25.620492    4727 system_pods.go:61] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:25.620493    4727 system_pods.go:61] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:25.620495    4727 system_pods.go:61] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:25.620497    4727 system_pods.go:61] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:25.620498    4727 system_pods.go:61] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:25.620500    4727 system_pods.go:61] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:25.620502    4727 system_pods.go:61] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:25.620505    4727 system_pods.go:74] duration metric: took 195.775375ms to wait for pod list to return data ...
	I0718 20:38:25.620509    4727 default_sa.go:34] waiting for default service account to be created ...
	I0718 20:38:25.813683    4727 request.go:629] Waited for 193.137584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813709    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813712    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.813716    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.813721    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.815354    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.815466    4727 default_sa.go:45] found service account: "default"
	I0718 20:38:25.815474    4727 default_sa.go:55] duration metric: took 194.966875ms for default service account to be created ...
	I0718 20:38:25.815479    4727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 20:38:26.013652    4727 request.go:629] Waited for 198.147166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.013695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.013702    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.016448    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:26.020596    4727 system_pods.go:86] 17 kube-system pods found
	I0718 20:38:26.020604    4727 system_pods.go:89] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:26.020607    4727 system_pods.go:89] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:26.020609    4727 system_pods.go:89] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:26.020611    4727 system_pods.go:89] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:26.020613    4727 system_pods.go:89] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:26.020615    4727 system_pods.go:89] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:26.020617    4727 system_pods.go:89] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:26.020619    4727 system_pods.go:89] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:26.020621    4727 system_pods.go:89] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:26.020622    4727 system_pods.go:89] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:26.020624    4727 system_pods.go:89] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:26.020626    4727 system_pods.go:89] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:26.020628    4727 system_pods.go:89] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:26.020629    4727 system_pods.go:89] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:26.020631    4727 system_pods.go:89] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:26.020633    4727 system_pods.go:89] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:26.020635    4727 system_pods.go:89] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:26.020641    4727 system_pods.go:126] duration metric: took 205.165291ms to wait for k8s-apps to be running ...
	I0718 20:38:26.020645    4727 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 20:38:26.020720    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:38:26.027026    4727 system_svc.go:56] duration metric: took 6.37875ms WaitForService to wait for kubelet
	I0718 20:38:26.027036    4727 kubeadm.go:582] duration metric: took 22.448955791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:38:26.027047    4727 node_conditions.go:102] verifying NodePressure condition ...
	I0718 20:38:26.213670    4727 request.go:629] Waited for 186.592667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213748    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213751    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.213756    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.213758    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.215369    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:26.215702    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215710    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215716    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215719    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215721    4727 node_conditions.go:105] duration metric: took 188.677125ms to run NodePressure ...
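	The NodePressure step reads each node's reported capacity and conditions. The same fields can be inspected directly; a hedged sketch with kubectl:

	# Capacity fields matching the values logged above (2 CPUs, 17734596Ki ephemeral storage).
	kubectl get node ha-256000 -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
	kubectl describe node ha-256000 | grep -A 8 'Conditions:'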
	I0718 20:38:26.215733    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:38:26.215747    4727 start.go:255] writing updated cluster config ...
	I0718 20:38:26.221138    4727 out.go:177] 
	I0718 20:38:26.225195    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:26.225251    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.230070    4727 out.go:177] * Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	I0718 20:38:26.238085    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:38:26.238092    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:38:26.238177    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:38:26.238184    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:38:26.238226    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.238529    4727 start.go:360] acquireMachinesLock for ha-256000-m03: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:38:26.238563    4727 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "ha-256000-m03"
	I0718 20:38:26.238573    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:26.238613    4727 start.go:125] createHost starting for "m03" (driver="qemu2")
	I0718 20:38:26.243026    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:38:26.268172    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:38:26.268206    4727 client.go:168] LocalClient.Create starting
	I0718 20:38:26.268290    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:38:26.268328    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268338    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268376    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:38:26.268399    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268406    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268691    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:38:26.426584    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:38:26.572781    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:38:26.572789    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:38:26.573022    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.588299    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.588321    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.588408    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2 +20000M
	I0718 20:38:26.597072    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:38:26.597089    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.597102    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
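	The two qemu-img invocations above build the m03 disk: the raw base image is converted to qcow2 and then grown by the requested size. The same steps can be run by hand; a minimal sketch with placeholder paths:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # raw base image -> qcow2
	qemu-img resize disk.qcow2 +20000M                           # grow to the requested 20000 MB
	qemu-img info disk.qcow2                                     # confirm format and virtual size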
	I0718 20:38:26.597113    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:38:26.597129    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:38:26.597163    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7f:0e:0c:6d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.641473    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.641500    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.641504    4727 main.go:141] libmachine: Attempt 0
	I0718 20:38:26.641520    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:26.641735    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:26.641749    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:26.641756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:26.641761    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:26.641765    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:26.641770    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:26.641776    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:28.643878    4727 main.go:141] libmachine: Attempt 1
	I0718 20:38:28.643913    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:28.644011    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:28.644023    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:28.644028    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:28.644032    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:28.644036    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:28.644046    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:28.644052    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:30.646081    4727 main.go:141] libmachine: Attempt 2
	I0718 20:38:30.646120    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:30.646235    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:30.646244    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:30.646250    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:30.646254    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:30.646258    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:30.646262    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:30.646267    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:32.648349    4727 main.go:141] libmachine: Attempt 3
	I0718 20:38:32.648374    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:32.648466    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:32.648477    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:32.648481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:32.648486    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:32.648497    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:32.648501    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:32.648514    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:34.650548    4727 main.go:141] libmachine: Attempt 4
	I0718 20:38:34.650566    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:34.650664    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:34.650674    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:34.650678    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:34.650682    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:34.650686    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:34.650692    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:34.650696    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:36.652758    4727 main.go:141] libmachine: Attempt 5
	I0718 20:38:36.652796    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:36.652971    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:36.652995    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:36.653008    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:36.653088    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:36.653108    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:36.653113    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:36.653119    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:38.654089    4727 main.go:141] libmachine: Attempt 6
	I0718 20:38:38.654205    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:38.654304    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:38.654315    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:38.654320    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:38.654329    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:38.654333    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:38.654338    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:38.654343    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:40.656398    4727 main.go:141] libmachine: Attempt 7
	I0718 20:38:40.656425    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:40.656535    4727 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0718 20:38:40.656552    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:d2:7f:e:c:6d:ba ID:1,d2:7f:e:c:6d:ba Lease:0x669b313f}
	I0718 20:38:40.656554    4727 main.go:141] libmachine: Found match: d2:7f:e:c:6d:ba
	I0718 20:38:40.656561    4727 main.go:141] libmachine: IP: 192.168.105.7
	I0718 20:38:40.656567    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
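	Attempts 0 through 7 above poll the macOS DHCP lease database until an entry for the VM's MAC address appears, which yields the node's IP (192.168.105.7 here). A manual lookup for the same MAC would look like this (hedged; the lease file format can vary between macOS releases):

	# Find the lease entry for the new VM's MAC address.
	grep -B 2 -A 3 'd2:7f:e:c:6d:ba' /var/db/dhcpd_leases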
	I0718 20:38:49.679874    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:38:49.680098    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.680386    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.680393    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:38:49.720341    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:38:49.720352    4727 buildroot.go:166] provisioning hostname "ha-256000-m03"
	I0718 20:38:49.720396    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.720501    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.720507    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m03 && echo "ha-256000-m03" | sudo tee /etc/hostname
	I0718 20:38:49.765619    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m03
	
	I0718 20:38:49.765691    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.765821    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.765830    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:38:49.809445    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:38:49.809457    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:38:49.809463    4727 buildroot.go:174] setting up certificates
	I0718 20:38:49.809467    4727 provision.go:84] configureAuth start
	I0718 20:38:49.809471    4727 provision.go:143] copyHostCerts
	I0718 20:38:49.809497    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809560    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:38:49.809567    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809680    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:38:49.810515    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810551    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:38:49.810554    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810618    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:38:49.810856    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810884    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:38:49.810888    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810942    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:38:49.811128    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m03 san=[127.0.0.1 192.168.105.7 ha-256000-m03 localhost minikube]
	I0718 20:38:49.892392    4727 provision.go:177] copyRemoteCerts
	I0718 20:38:49.892426    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:38:49.892435    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:49.917004    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:38:49.917069    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:38:49.925760    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:38:49.925809    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:38:49.934495    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:38:49.934547    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:38:49.944465    4727 provision.go:87] duration metric: took 134.994083ms to configureAuth
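	configureAuth generated a server certificate with the SANs listed above and copied it, its key, and the CA onto the new node for Docker's TLS endpoint. The installed material can be checked on the node; a hedged sketch with openssl:

	# Inspect the copied server certificate and its subject/validity on ha-256000-m03.
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A 1 'Subject Alternative Name'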
	I0718 20:38:49.944477    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:38:49.946418    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:49.946460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.946554    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.946559    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:38:49.988863    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:38:49.988874    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:38:49.988957    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:38:49.989005    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.989117    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.989151    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	Environment="NO_PROXY=192.168.105.5,192.168.105.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:38:50.033434    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	Environment=NO_PROXY=192.168.105.5,192.168.105.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:38:50.033494    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:50.033609    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:50.033618    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:38:51.357934    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:38:51.357948    4727 machine.go:97] duration metric: took 1.678110291s to provisionDockerMachine
	I0718 20:38:51.357955    4727 client.go:171] duration metric: took 25.090436s to LocalClient.Create
	I0718 20:38:51.357970    4727 start.go:167] duration metric: took 25.090492834s to libmachine.API.Create "ha-256000"
	I0718 20:38:51.357987    4727 start.go:293] postStartSetup for "ha-256000-m03" (driver="qemu2")
	I0718 20:38:51.357993    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:38:51.358064    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:38:51.358075    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.383362    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:38:51.385220    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:38:51.385229    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:38:51.385339    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:38:51.385460    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:38:51.385466    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:38:51.385589    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:38:51.389076    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:38:51.397667    4727 start.go:296] duration metric: took 39.676333ms for postStartSetup
	I0718 20:38:51.398148    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:51.398353    4727 start.go:128] duration metric: took 25.1604295s to createHost
	I0718 20:38:51.398381    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:51.398475    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:51.398479    4727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 20:38:51.443684    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360331.726119547
	
	I0718 20:38:51.443697    4727 fix.go:216] guest clock: 1721360331.726119547
	I0718 20:38:51.443701    4727 fix.go:229] Guest: 2024-07-18 20:38:51.726119547 -0700 PDT Remote: 2024-07-18 20:38:51.39836 -0700 PDT m=+164.266937085 (delta=327.759547ms)
	I0718 20:38:51.443713    4727 fix.go:200] guest clock delta is within tolerance: 327.759547ms
	I0718 20:38:51.443716    4727 start.go:83] releasing machines lock for "ha-256000-m03", held for 25.205843709s
	I0718 20:38:51.447883    4727 out.go:177] * Found network options:
	I0718 20:38:51.451892    4727 out.go:177]   - NO_PROXY=192.168.105.5,192.168.105.6
	W0718 20:38:51.455815    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.455829    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456208    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456223    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:38:51.456298    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:38:51.456327    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:38:51.479804    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:38:51.479862    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:38:51.524774    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:38:51.524786    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.524847    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.531855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:38:51.535855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:38:51.539545    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.539580    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:38:51.543520    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.547437    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:38:51.551284    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.555870    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:38:51.559926    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:38:51.563772    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:38:51.567972    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:38:51.572324    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:38:51.576791    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:38:51.580307    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.641726    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:38:51.654538    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.654606    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:38:51.661500    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.671940    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:38:51.683005    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.689286    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.694846    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:38:51.739658    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.745604    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.752465    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:38:51.754039    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:38:51.757754    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:38:51.764400    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:38:51.833658    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:38:51.901993    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.902021    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:38:51.910153    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.983567    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:39:53.221259    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.239360917s)
	I0718 20:39:53.221338    4727 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 20:39:53.233907    4727 out.go:177] 
	W0718 20:39:53.237861    4727 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:38:50 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531478880Z" level=info msg="Starting up"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531868672Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.532448547Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.550167964Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560007672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560035005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560063505Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560074839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560111130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560123547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560217922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560230922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560237130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560241589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560270464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560366505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561097130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561114380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561185047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561197839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561245172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561280130Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563923422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563946005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563952880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563959547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563972505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564012380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564132589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564175464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564185714Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564191797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564197839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564204005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564210464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564216297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564222297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564228089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564233922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564239422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564256255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564264589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564270589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564276339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564281380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564287547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564292755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564298214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564303922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564310047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564315047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564320255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564325630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564332547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564341589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564346797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564352089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564402380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564416755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564421630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564427380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564432047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564437755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564467089Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564611964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564632964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564646839Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564655005Z" level=info msg="containerd successfully booted in 0.014823s"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.553636672Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.561497047Z" level=info msg="Loading containers: start."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.589775631Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.620757631Z" level=info msg="Loading containers: done."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624562881Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624599339Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:38:51 ha-256000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641454297Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641495839Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:38:52 ha-256000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.265389656Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266153693Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266192011Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266216137Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266284865Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:53 ha-256000-m03 dockerd[931]: time="2024-07-19T03:38:53.282812481Z" level=info msg="Starting up"
	Jul 19 03:39:53 ha-256000-m03 dockerd[931]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 20:39:53.237915    4727 out.go:239] * 
	W0718 20:39:53.239556    4727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:39:53.244752    4727 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-256000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-020000                                                                                               | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh findmnt                                                                                      | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-020000                                                                                               | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh findmnt                                                                                      | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh findmnt                                                                                      | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh findmnt                                                                                      | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh findmnt                                                                                      | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh findmnt                                                                                      | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:35 PDT |
	|                | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-020000                                                                                               | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start          | -p functional-020000                                                                                               | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-020000 --dry-run                                                                                     | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-020000                                                                                               | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:35 PDT | 18 Jul 24 20:36 PDT |
	|                | -p functional-020000                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| image          | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | image ls --format short                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | image ls --format yaml                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | image ls --format json                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-020000                                                                                                  | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | image ls --format table                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-020000 ssh pgrep                                                                                        | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT |                     |
	|                | buildkitd                                                                                                          |                   |         |         |                     |                     |
	| image          | functional-020000 image build -t                                                                                   | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	|                | localhost/my-image:functional-020000                                                                               |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                   |                   |         |         |                     |                     |
	| image          | functional-020000 image ls                                                                                         | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	| delete         | -p functional-020000                                                                                               | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	| start          | -p ha-256000 --wait=true                                                                                           | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT |                     |
	|                | --memory=2200 --ha                                                                                                 |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
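	For reference, the cluster start covered by the "Last Start" log below corresponds to the final row of the command table above. A minimal way to re-run it by hand, assuming minikube v1.33.1 and the qemu2 driver are installed locally, would be:
	    minikube start -p ha-256000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
	    minikube delete -p ha-256000    # remove the profile again afterwards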
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:36:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:07.154652    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154655    4727 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:07.154657    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154787    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:36:07.155777    4727 out.go:298] Setting JSON to false
	I0718 20:36:07.172062    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2135,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:36:07.172136    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:07.175769    4727 out.go:177] * [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:36:07.182867    4727 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:07.182897    4727 notify.go:220] Checking for updates...
	I0718 20:36:07.188814    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:07.191895    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:36:07.192950    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:07.195871    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:36:07.198897    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:07.202011    4727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:07.205826    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 20:36:07.212869    4727 start.go:297] selected driver: qemu2
	I0718 20:36:07.212875    4727 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:36:07.212880    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:07.215027    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:36:07.217921    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0718 20:36:07.220933    4727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:36:07.220960    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:07.220968    4727 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 20:36:07.220971    4727 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 20:36:07.220995    4727 start.go:340] cluster config:
	{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:07.224405    4727 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:36:07.231878    4727 out.go:177] * Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	I0718 20:36:07.235849    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:07.235880    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:36:07.235892    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:07.235960    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:07.235965    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:07.236167    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:07.236181    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json: {Name:mk4f96c33b167a65b92bd4e48e5f1a3c7a52bbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:07.236387    4727 start.go:360] acquireMachinesLock for ha-256000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:07.236422    4727 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-256000"
	I0718 20:36:07.236432    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:07.236461    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 20:36:07.243901    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:07.268930    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:07.268958    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:07.269026    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:07.269056    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269065    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269104    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:07.269127    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269136    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269466    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:07.395393    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:07.434010    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:07.434014    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:07.434195    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.445169    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.445186    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.445241    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2 +20000M
	I0718 20:36:07.453205    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:07.453220    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.453236    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.453239    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:07.453248    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:07.453278    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e3:ed:16:92:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.491921    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.491947    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.491951    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:07.491963    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:07.492029    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:07.492048    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:07.492054    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:07.492061    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:07.492067    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:09.494175    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:09.494254    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:09.494618    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:09.494729    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:09.494764    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:09.494789    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:09.494817    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:11.496994    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:11.497242    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:11.497663    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:11.497717    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:11.497756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:11.497787    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:11.497819    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:13.500006    4727 main.go:141] libmachine: Attempt 3
	I0718 20:36:13.500080    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:13.500185    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:13.500200    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:13.500205    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:13.500210    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:13.500216    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:15.502208    4727 main.go:141] libmachine: Attempt 4
	I0718 20:36:15.502220    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:15.502255    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:15.502275    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:15.502280    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:15.502285    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:15.502290    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:17.504286    4727 main.go:141] libmachine: Attempt 5
	I0718 20:36:17.504293    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:17.504346    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:17.504356    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:17.504360    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:17.504364    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:17.504369    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:19.506369    4727 main.go:141] libmachine: Attempt 6
	I0718 20:36:19.506395    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:19.506467    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:19.506476    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:19.506481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:19.506485    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:19.506490    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:21.508527    4727 main.go:141] libmachine: Attempt 7
	I0718 20:36:21.508554    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:21.508694    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:21.508708    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:21.508719    4727 main.go:141] libmachine: Found match: 6a:e3:ed:16:92:d5
	I0718 20:36:21.508730    4727 main.go:141] libmachine: IP: 192.168.105.5
	I0718 20:36:21.508735    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
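	The attempts above show how the qemu2 driver discovers the guest IP: it repeatedly scans the host's DHCP lease database for the MAC address it assigned to the VM until a matching entry appears. A rough shell equivalent, assuming the usual /var/db/dhcpd_leases layout with ip_address= and hw_address= fields in each lease block, would be:
	    MAC="6a:e3:ed:16:92:d5"                                  # MAC assigned to ha-256000 in this run
	    until grep -q "$MAC" /var/db/dhcpd_leases; do sleep 2; done
	    grep -B 5 "$MAC" /var/db/dhcpd_leases | grep -m 1 ip_address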
	I0718 20:36:22.527247    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:36:22.527480    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.527975    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.527990    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:36:22.610697    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:36:22.610726    4727 buildroot.go:166] provisioning hostname "ha-256000"
	I0718 20:36:22.610824    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.611097    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.611107    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000 && echo "ha-256000" | sudo tee /etc/hostname
	I0718 20:36:22.682492    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000
	
	I0718 20:36:22.682552    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.682702    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.682713    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:36:22.742479    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:36:22.742492    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:36:22.742500    4727 buildroot.go:174] setting up certificates
	I0718 20:36:22.742504    4727 provision.go:84] configureAuth start
	I0718 20:36:22.742508    4727 provision.go:143] copyHostCerts
	I0718 20:36:22.742542    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742586    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:36:22.742593    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742831    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:36:22.743010    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743030    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:36:22.743033    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743097    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:36:22.743184    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743212    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:36:22.743215    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743275    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:36:22.743373    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000 san=[127.0.0.1 192.168.105.5 ha-256000 localhost minikube]
	I0718 20:36:22.831924    4727 provision.go:177] copyRemoteCerts
	I0718 20:36:22.831953    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:36:22.831960    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:22.861471    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:36:22.861517    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:36:22.869576    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:36:22.869616    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0718 20:36:22.877642    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:36:22.877682    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 20:36:22.885597    4727 provision.go:87] duration metric: took 143.091583ms to configureAuth
	I0718 20:36:22.885605    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:36:22.885700    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:22.885731    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.885814    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.885819    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:36:22.939257    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:36:22.939268    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:36:22.939327    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:36:22.939382    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.939495    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.939529    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:36:22.999120    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:36:22.999176    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.999299    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.999307    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:36:24.399001    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:36:24.399014    4727 machine.go:97] duration metric: took 1.871786709s to provisionDockerMachine
	I0718 20:36:24.399020    4727 client.go:171] duration metric: took 17.130530167s to LocalClient.Create
	I0718 20:36:24.399035    4727 start.go:167] duration metric: took 17.130580916s to libmachine.API.Create "ha-256000"
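	If a run like this needs to be debugged by hand, the docker unit that was just written can be inspected from the host; two illustrative commands, using the profile name from this log and assuming the guest is still running, are:
	    minikube -p ha-256000 ssh -- systemctl cat docker
	    minikube -p ha-256000 ssh -- sudo journalctl -u docker --no-pager -n 50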
	I0718 20:36:24.399041    4727 start.go:293] postStartSetup for "ha-256000" (driver="qemu2")
	I0718 20:36:24.399047    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:36:24.399133    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:36:24.399144    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.429882    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:36:24.431446    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:36:24.431458    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:36:24.431559    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:36:24.431674    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:36:24.431679    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:36:24.431800    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:36:24.434949    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:24.443099    4727 start.go:296] duration metric: took 44.054208ms for postStartSetup
	I0718 20:36:24.443547    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:24.443727    4727 start.go:128] duration metric: took 17.207737166s to createHost
	I0718 20:36:24.443753    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:24.443841    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:24.443845    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:36:24.496185    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360184.183489336
	
	I0718 20:36:24.496191    4727 fix.go:216] guest clock: 1721360184.183489336
	I0718 20:36:24.496195    4727 fix.go:229] Guest: 2024-07-18 20:36:24.183489336 -0700 PDT Remote: 2024-07-18 20:36:24.44373 -0700 PDT m=+17.308254043 (delta=-260.240664ms)
	I0718 20:36:24.496206    4727 fix.go:200] guest clock delta is within tolerance: -260.240664ms
	I0718 20:36:24.496210    4727 start.go:83] releasing machines lock for "ha-256000", held for 17.260259709s
	I0718 20:36:24.496487    4727 ssh_runner.go:195] Run: cat /version.json
	I0718 20:36:24.496496    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.498161    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:36:24.498180    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.526501    4727 ssh_runner.go:195] Run: systemctl --version
	I0718 20:36:24.575612    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 20:36:24.577665    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:36:24.577696    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:36:24.584047    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:36:24.584056    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.584135    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.590860    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:36:24.594365    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:36:24.597804    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.597834    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:36:24.601501    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.605402    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:36:24.609279    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.613150    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:36:24.616783    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:36:24.620826    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:36:24.624868    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:36:24.628746    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:36:24.632406    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:36:24.635998    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:24.719937    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:36:24.727107    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.727172    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:36:24.734556    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.745145    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:36:24.752682    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.758405    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.763722    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:36:24.804424    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.810784    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.817505    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:36:24.818968    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:36:24.822004    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:36:24.827814    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:36:24.912234    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:36:24.993893    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.993951    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:36:25.000295    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:25.079893    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:27.267877    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188026583s)
	I0718 20:36:27.267954    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:36:27.273388    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:36:27.280952    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.286424    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:36:27.376871    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:36:27.462186    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.546490    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:36:27.553023    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.558470    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.643444    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:36:27.668876    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:36:27.669018    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:36:27.671231    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:36:27.671271    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:36:27.672746    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:36:27.689183    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:36:27.689243    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.699313    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.710299    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:36:27.710436    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:36:27.711936    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:27.716497    4727 kubeadm.go:883] updating cluster {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 20:36:27.716547    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:27.716590    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:27.721193    4727 docker.go:685] Got preloaded images: 
	I0718 20:36:27.721201    4727 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0718 20:36:27.721249    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:27.725068    4727 ssh_runner.go:195] Run: which lz4
	I0718 20:36:27.726303    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0718 20:36:27.726385    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0718 20:36:27.727841    4727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 20:36:27.727857    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335411903 bytes)
	I0718 20:36:29.032881    4727 docker.go:649] duration metric: took 1.306555792s to copy over tarball
	I0718 20:36:29.032945    4727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 20:36:30.077797    4727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.044866416s)
	I0718 20:36:30.077812    4727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 20:36:30.092929    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:30.096929    4727 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0718 20:36:30.102897    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:30.190133    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:32.408215    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218126791s)
	I0718 20:36:32.408325    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:32.414564    4727 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 20:36:32.414576    4727 cache_images.go:84] Images are preloaded, skipping loading
	I0718 20:36:32.414588    4727 kubeadm.go:934] updating node { 192.168.105.5 8443 v1.30.3 docker true true} ...
	I0718 20:36:32.414662    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:36:32.414717    4727 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 20:36:32.422967    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:32.422975    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:32.422989    4727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 20:36:32.423001    4727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256000 NodeName:ha-256000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 20:36:32.423064    4727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-256000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
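The kubeadm config printed above is rendered from the options struct logged a few lines earlier: node IP, node name, control-plane endpoint, Kubernetes version, and the pod and service CIDRs. A trimmed sketch of that kind of rendering with text/template follows, using the same values; the template text and struct are illustrative, not minikube's actual bootstrapper template.

    // Renders a reduced InitConfiguration/ClusterConfiguration pair like the one above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	data := struct {
    		NodeIP, NodeName, ControlPlaneEndpoint      string
    		KubernetesVersion, PodSubnet, ServiceSubnet string
    		APIServerPort                               int
    	}{
    		NodeIP:               "192.168.105.5",
    		NodeName:             "ha-256000",
    		ControlPlaneEndpoint: "control-plane.minikube.internal",
    		KubernetesVersion:    "v1.30.3",
    		PodSubnet:            "10.244.0.0/16",
    		ServiceSubnet:        "10.96.0.0/12",
    		APIServerPort:        8443,
    	}
    	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }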
	
	I0718 20:36:32.423074    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:36:32.423127    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:36:32.430238    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:36:32.430293    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
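The kube-vip static pod above is driven almost entirely by environment variables: ARP advertisement of the VIP 192.168.105.254 on eth0, leader election on the plndr-cp-lock lease, and, because control-plane load balancing was auto-enabled, lb_enable/lb_port forwarding API traffic to port 8443. A small sketch of assembling that env set for a given VIP and port is shown below; the helper is hypothetical and is not minikube's kube-vip.go.

    package main

    import (
    	"fmt"
    	"sort"
    )

    // kubeVIPEnv returns the environment that drives kube-vip in the manifest
    // above: ARP mode, control-plane mode with leader election, and the optional
    // load balancer on the API server port.
    func kubeVIPEnv(vip string, port int) map[string]string {
    	return map[string]string{
    		"vip_arp":            "true",
    		"port":               fmt.Sprint(port),
    		"vip_interface":      "eth0",
    		"vip_cidr":           "32",
    		"cp_enable":          "true",
    		"cp_namespace":       "kube-system",
    		"vip_leaderelection": "true",
    		"vip_leasename":      "plndr-cp-lock",
    		"address":            vip,
    		"lb_enable":          "true",
    		"lb_port":            fmt.Sprint(port),
    	}
    }

    func main() {
    	env := kubeVIPEnv("192.168.105.254", 8443)
    	keys := make([]string, 0, len(env))
    	for k := range env {
    		keys = append(keys, k)
    	}
    	sort.Strings(keys)
    	for _, k := range keys {
    		fmt.Printf("%s=%s\n", k, env[k])
    	}
    }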
	I0718 20:36:32.430329    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:36:32.433734    4727 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 20:36:32.433764    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0718 20:36:32.437628    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0718 20:36:32.443760    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:36:32.449483    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0718 20:36:32.455815    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0718 20:36:32.461759    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:36:32.463168    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
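The bash one-liner above is an idempotent host-record update: any existing control-plane.minikube.internal line is filtered out, the current VIP mapping is appended, and the result is copied back over /etc/hosts. The same logic is sketched below in Go, operating on a local copy of the file rather than over SSH; the helper name is made up for illustration.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // injectHostRecord drops any stale line ending in "\t<host>" and appends the
    // current "<ip>\t<host>" mapping, mirroring the grep -v / echo pipeline above.
    func injectHostRecord(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) { // same filter as the grep -v above
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := injectHostRecord("/tmp/hosts", "192.168.105.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }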
	I0718 20:36:32.467182    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:32.556522    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:36:32.567007    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.5
	I0718 20:36:32.567019    4727 certs.go:194] generating shared ca certs ...
	I0718 20:36:32.567029    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.567195    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:36:32.567242    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:36:32.567249    4727 certs.go:256] generating profile certs ...
	I0718 20:36:32.567287    4727 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:36:32.567299    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt with IP's: []
	I0718 20:36:32.629331    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt ...
	I0718 20:36:32.629341    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt: {Name:mkc9c3e562115edef8b85e012e81a3eb4a2cf75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key ...
	I0718 20:36:32.629649    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key: {Name:mkb41caa35d055a2dcb04d364862addacfff33bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629781    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4
	I0718 20:36:32.629789    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.254]
	I0718 20:36:32.695617    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 ...
	I0718 20:36:32.695626    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4: {Name:mkee89910ca1db08ac083863b0e4a027ae270203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696056    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 ...
	I0718 20:36:32.696061    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4: {Name:mk8365902b4e9f071c9404629a4b35cc6ca6ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696198    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:36:32.696306    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:36:32.696557    4727 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:36:32.696565    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt with IP's: []
	I0718 20:36:32.762976    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt ...
	I0718 20:36:32.762980    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt: {Name:mkb3e0281e7ef362624ad24bb17cfb244b9bc171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763112    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key ...
	I0718 20:36:32.763115    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key: {Name:mkc06a04ddb3616913d2c6f5647bad25fef6f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763224    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:36:32.763237    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:36:32.763247    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:36:32.763257    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:36:32.763268    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:36:32.763279    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:36:32.763290    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:36:32.763301    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
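Certificate setup here is entirely local: minikube reuses the shared CA key pair from .minikube and signs a client cert, an apiserver serving cert whose IP SANs include the node IP 192.168.105.5, the HA VIP 192.168.105.254 and the in-cluster service IPs, and a proxy-client (aggregator) cert. Below is a condensed sketch of the apiserver-style generation with crypto/x509; the CA is created inline so the example is self-contained, whereas minikube loads the existing ca.key instead, and none of this is minikube's crypto.go verbatim.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Throwaway CA standing in for minikubeCA.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Serving cert with the same IP SAN set as the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.105.5"), net.ParseIP("192.168.105.254"),
    		},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	fmt.Printf("apiserver cert: %d bytes DER, %d IP SANs\n", len(srvDER), len(srvTmpl.IPAddresses))
    }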
	I0718 20:36:32.763382    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:36:32.763410    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:36:32.763415    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:36:32.763434    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:36:32.763451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:36:32.763468    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:36:32.763505    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:32.763524    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.763535    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.763546    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.763807    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:36:32.773281    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:36:32.781447    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:36:32.789770    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:36:32.798040    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 20:36:32.806232    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:36:32.814458    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:36:32.822522    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:36:32.830515    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:36:32.838566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:36:32.846581    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:36:32.854568    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 20:36:32.860769    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:36:32.863035    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:36:32.867352    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868859    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868879    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.870984    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:36:32.874504    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:36:32.878096    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879659    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879678    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.881640    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:36:32.885559    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:36:32.889461    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891114    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891133    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.893171    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
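The openssl/ln sequence above installs each CA bundle into the guest's trust store: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash`, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941.0 is the hash produced for minikubeCA.pem, for example). A sketch of driving the same commands from Go via os/exec follows; it assumes openssl is on PATH and write access to the certs directory, and it is not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCACert links a CA PEM into the trust store under its OpenSSL subject
    // hash, mirroring the `openssl x509 -hash` + `ln -fs` steps in the log.
    func installCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %v", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }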
	I0718 20:36:32.897112    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:36:32.898621    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:36:32.898660    4727 kubeadm.go:392] StartCluster: {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clus
terName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:32.898726    4727 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 20:36:32.903849    4727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 20:36:32.907545    4727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 20:36:32.910740    4727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 20:36:32.914021    4727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 20:36:32.914030    4727 kubeadm.go:157] found existing configuration files:
	
	I0718 20:36:32.914050    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 20:36:32.917254    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 20:36:32.917277    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 20:36:32.920874    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 20:36:32.924549    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 20:36:32.924574    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 20:36:32.928189    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.931542    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 20:36:32.931572    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.934804    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 20:36:32.937825    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 20:36:32.937847    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 20:36:32.941208    4727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 20:36:32.964473    4727 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0718 20:36:32.964502    4727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 20:36:33.010272    4727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 20:36:33.010346    4727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 20:36:33.010394    4727 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0718 20:36:33.080896    4727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 20:36:33.088116    4727 out.go:204]   - Generating certificates and keys ...
	I0718 20:36:33.088149    4727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 20:36:33.088180    4727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 20:36:33.187618    4727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 20:36:33.225765    4727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0718 20:36:33.439485    4727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0718 20:36:33.599214    4727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0718 20:36:33.681357    4727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0718 20:36:33.681418    4727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.726840    4727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0718 20:36:33.726901    4727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.875169    4727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 20:36:34.071575    4727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 20:36:34.163748    4727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0718 20:36:34.163778    4727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 20:36:34.260583    4727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 20:36:34.352375    4727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 20:36:34.395125    4727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 20:36:34.512349    4727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 20:36:34.655223    4727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 20:36:34.655381    4727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 20:36:34.656483    4727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 20:36:34.666848    4727 out.go:204]   - Booting up control plane ...
	I0718 20:36:34.666901    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 20:36:34.666950    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 20:36:34.666982    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 20:36:34.667031    4727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 20:36:34.667081    4727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 20:36:34.667103    4727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 20:36:34.759306    4727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0718 20:36:34.759350    4727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0718 20:36:35.263383    4727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.7975ms
	I0718 20:36:35.263624    4727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0718 20:36:38.766721    4727 kubeadm.go:310] [api-check] The API server is healthy after 3.504642043s
	I0718 20:36:38.772139    4727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 20:36:38.775784    4727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 20:36:38.782114    4727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 20:36:38.782191    4727 kubeadm.go:310] [mark-control-plane] Marking the node ha-256000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 20:36:38.784595    4727 kubeadm.go:310] [bootstrap-token] Using token: yv8fsh.sh51yi31jewcw15j
	I0718 20:36:38.788784    4727 out.go:204]   - Configuring RBAC rules ...
	I0718 20:36:38.788835    4727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 20:36:38.790051    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 20:36:38.796261    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 20:36:38.797188    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 20:36:38.797986    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 20:36:38.798957    4727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 20:36:39.169725    4727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 20:36:39.576005    4727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 20:36:40.169284    4727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 20:36:40.169608    4727 kubeadm.go:310] 
	I0718 20:36:40.169641    4727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 20:36:40.169646    4727 kubeadm.go:310] 
	I0718 20:36:40.169692    4727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 20:36:40.169695    4727 kubeadm.go:310] 
	I0718 20:36:40.169709    4727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 20:36:40.169760    4727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 20:36:40.169794    4727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 20:36:40.169797    4727 kubeadm.go:310] 
	I0718 20:36:40.169826    4727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 20:36:40.169830    4727 kubeadm.go:310] 
	I0718 20:36:40.169856    4727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 20:36:40.169858    4727 kubeadm.go:310] 
	I0718 20:36:40.169883    4727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 20:36:40.169938    4727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 20:36:40.169984    4727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 20:36:40.169987    4727 kubeadm.go:310] 
	I0718 20:36:40.170044    4727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 20:36:40.170090    4727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 20:36:40.170093    4727 kubeadm.go:310] 
	I0718 20:36:40.170134    4727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170222    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 20:36:40.170234    4727 kubeadm.go:310] 	--control-plane 
	I0718 20:36:40.170242    4727 kubeadm.go:310] 
	I0718 20:36:40.170285    4727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 20:36:40.170299    4727 kubeadm.go:310] 
	I0718 20:36:40.170351    4727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170426    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 20:36:40.170492    4727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 20:36:40.170502    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:40.170507    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:40.176555    4727 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 20:36:40.183616    4727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 20:36:40.185686    4727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 20:36:40.185696    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 20:36:40.191764    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 20:36:40.332259    4727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 20:36:40.332307    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.332337    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000 minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=true
	I0718 20:36:40.385331    4727 ops.go:34] apiserver oom_adj: -16
	I0718 20:36:40.385383    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.887435    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.387480    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.887395    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.387370    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.885756    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.387374    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.886101    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.386656    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.887355    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.387330    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.887331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.386668    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.886398    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.385335    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.887237    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.387224    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.887271    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.387175    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.885647    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.387168    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.887214    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.387158    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.887129    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.387127    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.887088    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.387119    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.885301    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.387061    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.453749    4727 kubeadm.go:1113] duration metric: took 14.12187225s to wait for elevateKubeSystemPrivileges
	I0718 20:36:54.453766    4727 kubeadm.go:394] duration metric: took 21.55570275s to StartCluster
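The burst of identical `kubectl get sa default` runs between 20:36:40 and 20:36:54 is a simple poll: kubeadm init has finished, but the default service account only appears once the controller-manager's service-account controller catches up, so minikube retries roughly every 500ms until the command succeeds (about 14s here) before relying on kube-system:default for the cluster-admin binding. The same wait expressed as a small Go retry loop is shown below; it is a hypothetical local helper, whereas minikube drives kubectl over SSH.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // deadline passes, mirroring the ~500ms retry cadence visible in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig).Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("default service account not ready after %s: %v", timeout, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }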
	I0718 20:36:54.453776    4727 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.453868    4727 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.454239    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.454483    4727 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.454492    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:36:54.454494    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 20:36:54.454496    4727 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 20:36:54.454530    4727 addons.go:69] Setting storage-provisioner=true in profile "ha-256000"
	I0718 20:36:54.454533    4727 addons.go:69] Setting default-storageclass=true in profile "ha-256000"
	I0718 20:36:54.454543    4727 addons.go:234] Setting addon storage-provisioner=true in "ha-256000"
	I0718 20:36:54.454546    4727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-256000"
	I0718 20:36:54.454554    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.454722    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.455342    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.455486    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 20:36:54.455762    4727 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 20:36:54.455811    4727 addons.go:234] Setting addon default-storageclass=true in "ha-256000"
	I0718 20:36:54.455823    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.460675    4727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 20:36:54.464747    4727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.464758    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 20:36:54.464769    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.465436    4727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.465440    4727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 20:36:54.465444    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.511774    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.706626    4727 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
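The sed pipeline a few lines up edits the live CoreDNS Corefile: it inserts a `hosts` block mapping host.minikube.internal to the gateway IP 192.168.105.1 ahead of the `forward . /etc/resolv.conf` line, then feeds the result back through `kubectl replace`. A string-level sketch of that Corefile edit follows; the helper operates on an in-memory Corefile and skips the kubectl plumbing, so names and the sample Corefile are illustrative.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostsBlock inserts a CoreDNS hosts{} stanza before the forward
    // directive, the same edit the sed expression in the log performs.
    func injectHostsBlock(corefile, hostIP string) string {
    	block := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(block)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
    	fmt.Print(injectHostsBlock(corefile, "192.168.105.1"))
    }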
	I0718 20:36:54.777305    4727 round_trippers.go:463] GET https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 20:36:54.777314    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.777318    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.777321    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.782732    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:36:54.783013    4727 round_trippers.go:463] PUT https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0718 20:36:54.783019    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.783023    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.783026    4727 round_trippers.go:473]     Content-Type: application/json
	I0718 20:36:54.783028    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.784014    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:36:54.792272    4727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 20:36:54.793579    4727 addons.go:510] duration metric: took 339.092083ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0718 20:36:54.793593    4727 start.go:246] waiting for cluster config update ...
	I0718 20:36:54.793600    4727 start.go:255] writing updated cluster config ...
	I0718 20:36:54.798143    4727 out.go:177] 
	I0718 20:36:54.802340    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.802369    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.805206    4727 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:36:54.813295    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:54.813304    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:54.813383    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:54.813389    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:54.813425    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.813828    4727 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:54.813863    4727 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:36:54.813872    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:tr
ue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.813899    4727 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0718 20:36:54.818236    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:54.833731    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:54.833754    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:54.833854    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:54.833891    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833898    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.833936    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:54.833959    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833965    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.834273    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:54.991167    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:55.074302    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:55.074313    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:55.074505    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.084177    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.084198    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.084247    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2 +20000M
	I0718 20:36:55.092640    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:55.092655    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.092668    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.092672    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:55.092685    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:55.092723    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.131373    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.131397    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.131401    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:55.131414    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:55.131476    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:55.131491    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:55.131496    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:55.131509    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:55.131515    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:55.131521    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:57.132241    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:57.132260    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:57.132370    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:57.132380    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:57.132387    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:57.132391    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:57.132399    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:57.132403    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:59.134429    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:59.134514    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:59.134610    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:59.134633    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:59.134640    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:59.134645    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:59.134650    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:59.134655    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:01.136704    4727 main.go:141] libmachine: Attempt 3
	I0718 20:37:01.136730    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:01.136864    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:01.136874    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:01.136879    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:01.136892    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:01.136897    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:01.136902    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:03.139087    4727 main.go:141] libmachine: Attempt 4
	I0718 20:37:03.139131    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:03.139262    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:03.139278    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:03.139286    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:03.139290    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:03.139295    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:03.139305    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:05.141342    4727 main.go:141] libmachine: Attempt 5
	I0718 20:37:05.141371    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:05.141487    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:05.141499    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:05.141504    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:05.141508    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:05.141513    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:05.141518    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:07.141729    4727 main.go:141] libmachine: Attempt 6
	I0718 20:37:07.141760    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:07.141844    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:07.141853    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:07.141858    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:07.141862    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:07.141866    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:07.141871    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:09.143893    4727 main.go:141] libmachine: Attempt 7
	I0718 20:37:09.143910    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:09.143997    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:37:09.144009    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:37:09.144011    4727 main.go:141] libmachine: Found match: 5a:e8:7:38:73:30
	I0718 20:37:09.144020    4727 main.go:141] libmachine: IP: 192.168.105.6
	I0718 20:37:09.144023    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0718 20:37:22.173394    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:37:22.173460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.173824    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.173832    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:37:22.224366    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:37:22.224379    4727 buildroot.go:166] provisioning hostname "ha-256000-m02"
	I0718 20:37:22.224437    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.224569    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.224574    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m02 && echo "ha-256000-m02" | sudo tee /etc/hostname
	I0718 20:37:22.281136    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m02
	
	I0718 20:37:22.281193    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.281326    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.281333    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:37:22.335405    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:37:22.335420    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:37:22.335427    4727 buildroot.go:174] setting up certificates
	I0718 20:37:22.335432    4727 provision.go:84] configureAuth start
	I0718 20:37:22.335436    4727 provision.go:143] copyHostCerts
	I0718 20:37:22.335460    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335499    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:37:22.335504    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335625    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:37:22.335755    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335793    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:37:22.335798    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335849    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:37:22.335937    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.335958    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:37:22.335961    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.336009    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:37:22.336098    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m02 san=[127.0.0.1 192.168.105.6 ha-256000-m02 localhost minikube]
	I0718 20:37:22.416839    4727 provision.go:177] copyRemoteCerts
	I0718 20:37:22.417292    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:37:22.417307    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:22.446250    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:37:22.446323    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:37:22.455193    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:37:22.455243    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:37:22.463182    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:37:22.463217    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:37:22.471841    4727 provision.go:87] duration metric: took 136.406375ms to configureAuth
	I0718 20:37:22.471860    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:37:22.472154    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:22.472192    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.472306    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.472312    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:37:22.520570    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:37:22.520580    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:37:22.520661    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:37:22.520720    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.520835    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.520884    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:37:22.573905    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:37:22.573954    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.574074    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.574082    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:37:23.946918    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:37:23.946932    4727 machine.go:97] duration metric: took 1.773574458s to provisionDockerMachine
	I0718 20:37:23.946948    4727 client.go:171] duration metric: took 29.113993584s to LocalClient.Create
	I0718 20:37:23.946964    4727 start.go:167] duration metric: took 29.114041166s to libmachine.API.Create "ha-256000"
	I0718 20:37:23.946968    4727 start.go:293] postStartSetup for "ha-256000-m02" (driver="qemu2")
	I0718 20:37:23.946975    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:37:23.947049    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:37:23.947059    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:23.975789    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:37:23.977316    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:37:23.977325    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:37:23.977414    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:37:23.977531    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:37:23.977538    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:37:23.977667    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:37:23.981129    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:23.989836    4727 start.go:296] duration metric: took 42.86225ms for postStartSetup
	I0718 20:37:23.990279    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:37:23.990466    4727 start.go:128] duration metric: took 29.177367125s to createHost
	I0718 20:37:23.990492    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:23.990582    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:23.990587    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:37:24.039991    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360244.056265969
	
	I0718 20:37:24.040003    4727 fix.go:216] guest clock: 1721360244.056265969
	I0718 20:37:24.040011    4727 fix.go:229] Guest: 2024-07-18 20:37:24.056265969 -0700 PDT Remote: 2024-07-18 20:37:23.990469 -0700 PDT m=+76.856635126 (delta=65.796969ms)
	I0718 20:37:24.040021    4727 fix.go:200] guest clock delta is within tolerance: 65.796969ms
	I0718 20:37:24.040027    4727 start.go:83] releasing machines lock for "ha-256000-m02", held for 29.226966s
	I0718 20:37:24.045188    4727 out.go:177] * Found network options:
	I0718 20:37:24.048256    4727 out.go:177]   - NO_PROXY=192.168.105.5
	W0718 20:37:24.052331    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:37:24.052639    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:37:24.052695    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:37:24.052702    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:24.052696    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:37:24.052803    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	W0718 20:37:24.080701    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:37:24.080760    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:37:24.120864    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:37:24.120877    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.120944    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.128913    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:37:24.133095    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:37:24.137320    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.137368    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:37:24.141513    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.145685    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:37:24.149674    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.153524    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:37:24.157504    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:37:24.161442    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:37:24.165217    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:37:24.169715    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:37:24.173504    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:37:24.177428    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.249585    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:37:24.258814    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.258889    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:37:24.266134    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.272789    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:37:24.282701    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.287831    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.293394    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:37:24.332150    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.338444    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.344970    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:37:24.346508    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:37:24.349662    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:37:24.355683    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:37:24.439008    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:37:24.522884    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.522913    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:37:24.529269    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.614408    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:37:26.705797    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.091426708s)
	I0718 20:37:26.705868    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:37:26.711797    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:37:26.719055    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.724747    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:37:26.813533    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:37:26.893596    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:26.965581    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:37:26.972962    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.978785    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:27.061213    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:37:27.087585    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:37:27.087659    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:37:27.091046    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:37:27.091097    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:37:27.092542    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:37:27.112215    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:37:27.112278    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.124950    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.136592    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:37:27.145555    4727 out.go:177]   - env NO_PROXY=192.168.105.5
	I0718 20:37:27.149713    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:37:27.151201    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:27.155414    4727 mustload.go:65] Loading cluster: ha-256000
	I0718 20:37:27.155551    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:27.156066    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:27.156157    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.6
	I0718 20:37:27.156161    4727 certs.go:194] generating shared ca certs ...
	I0718 20:37:27.156167    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.156269    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:37:27.156316    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:37:27.156321    4727 certs.go:256] generating profile certs ...
	I0718 20:37:27.156387    4727 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:37:27.156400    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9
	I0718 20:37:27.156410    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 192.168.105.254]
	I0718 20:37:27.328161    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 ...
	I0718 20:37:27.328188    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9: {Name:mkff536dfdabd0cc9a693525dd142a97006d4485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 ...
	I0718 20:37:27.328655    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9: {Name:mkb963d77aed955311589ae3cd9371dca3b50bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328816    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:37:27.328945    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:37:27.329100    4727 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:37:27.329110    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:37:27.329125    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:37:27.329137    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:37:27.329150    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:37:27.329162    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:37:27.329176    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:37:27.329186    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:37:27.329197    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:37:27.329271    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:37:27.329299    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:37:27.329305    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:37:27.329347    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:37:27.329372    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:37:27.329396    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:37:27.329451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:27.329478    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.329491    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.329501    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.329519    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:27.355925    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0718 20:37:27.357647    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0718 20:37:27.362088    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0718 20:37:27.363733    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0718 20:37:27.367759    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0718 20:37:27.369261    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0718 20:37:27.373839    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0718 20:37:27.375475    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0718 20:37:27.379174    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0718 20:37:27.380628    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0718 20:37:27.384809    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0718 20:37:27.386562    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0718 20:37:27.390606    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:37:27.399865    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:37:27.408308    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:37:27.416747    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:37:27.425050    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0718 20:37:27.433244    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:37:27.441306    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:37:27.449446    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:37:27.457566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:37:27.465676    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:37:27.473743    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:37:27.482174    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0718 20:37:27.487947    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0718 20:37:27.493902    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0718 20:37:27.499712    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0718 20:37:27.505265    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0718 20:37:27.511047    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0718 20:37:27.517340    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0718 20:37:27.523229    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:37:27.525438    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:37:27.529080    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530597    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530617    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.532775    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:37:27.536483    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:37:27.540031    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541631    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541649    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.543631    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:37:27.547571    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:37:27.551419    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553057    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553079    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.555162    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:37:27.559227    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:37:27.560725    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:37:27.560754    4727 kubeadm.go:934] updating node {m02 192.168.105.6 8443 v1.30.3 docker true true} ...
	I0718 20:37:27.560799    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:37:27.560814    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:37:27.560837    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:37:27.572539    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:37:27.572577    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0718 20:37:27.572623    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.576082    4727 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0718 20:37:27.576121    4727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm
	I0718 20:37:27.579785    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet
	I0718 20:37:34.561853    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.561928    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.564073    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0718 20:37:34.564095    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (49938584 bytes)
	I0718 20:37:35.510887    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.510952    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.512864    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0718 20:37:35.512884    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (48955544 bytes)
	I0718 20:37:42.606961    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:37:42.613080    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.613168    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.614817    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0718 20:37:42.614833    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (96467384 bytes)
	I0718 20:37:43.119287    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0718 20:37:43.122637    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0718 20:37:43.128732    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:37:43.134516    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0718 20:37:43.141275    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:37:43.142606    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:43.146857    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:43.230113    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:37:43.243145    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:43.243333    4727 start.go:317] joinCluster: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:37:43.243382    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 20:37:43.243391    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:43.371073    4727 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:37:43.371092    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0718 20:38:03.232381    4727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (19.861822375s)
	I0718 20:38:03.232396    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 20:38:03.485331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000-m02 minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=false
	I0718 20:38:03.530961    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0718 20:38:03.578648    4727 start.go:319] duration metric: took 20.3358655s to joinCluster
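The join sequence above is the standard way to add a second control-plane member and make it schedulable: mint a join command on the existing control plane, run it on the new machine with --control-plane and its advertise address, then label the node and drop the control-plane NoSchedule taint. Done by hand, the steps look roughly like this (node names taken from the log):

    # on the existing control plane
    sudo kubeadm token create --print-join-command --ttl=0
    # on ha-256000-m02: run the printed join command, adding --control-plane and
    # --apiserver-advertise-address; then, from any kubeconfig-equipped host:
    kubectl label --overwrite nodes ha-256000-m02 minikube.k8s.io/primary=false
    kubectl taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-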
	I0718 20:38:03.578688    4727 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:03.578898    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:03.583884    4727 out.go:177] * Verifying Kubernetes components...
	I0718 20:38:03.590972    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:03.702999    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:38:03.709797    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:38:03.709929    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0718 20:38:03.709957    4727 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.105.254:8443 with https://192.168.105.5:8443
	I0718 20:38:03.710058    4727 node_ready.go:35] waiting up to 6m0s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:03.710093    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:03.710097    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:03.710101    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:03.710109    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:03.716299    4727 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 20:38:04.212157    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.212175    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.212180    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.212182    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.217870    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:38:04.711681    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.711692    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.711696    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.711698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.713463    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.212138    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.212149    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.212153    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.212156    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.214175    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:05.711331    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.711345    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.711360    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.711363    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.712682    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.713155    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:06.210250    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.210264    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.210268    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.210271    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.212254    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:06.711235    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.711255    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.711260    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.711262    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.712940    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.212089    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.212100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.212104    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.212106    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.214317    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:07.712070    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.712079    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.712083    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.712086    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.713825    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.714102    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:08.211862    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.211878    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.211883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.211885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.213993    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:08.712062    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.712075    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.712079    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.712081    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.713753    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.212027    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.212036    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.212052    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.212055    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.213833    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.712020    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.712029    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.712033    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.712035    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.713439    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.212016    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.212025    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.212029    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.212031    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.213662    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.213924    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:10.711085    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.711100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.711114    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.711117    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.712848    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.211980    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.211995    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.211999    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.212002    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.213760    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.711981    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.711994    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.712005    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.712008    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.713435    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.211955    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.211969    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.211974    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.211976    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.213759    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.214202    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:12.711912    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.711929    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.711933    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.711935    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.713382    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.211920    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.211932    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.211941    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.211943    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.213828    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.711194    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.711206    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.711209    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.711211    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.712757    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:14.211901    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.211919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.211924    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.211932    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.213956    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:14.214285    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:14.711860    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.711876    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.711883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.711885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.713170    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.211895    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.211907    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.211911    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.211913    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.213693    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.711835    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.711849    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.711863    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.711865    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.713487    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.211839    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.211844    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.211846    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.213365    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.711659    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.711669    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.711673    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.711675    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.713252    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.713433    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:17.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.211830    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.211834    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.211836    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.213413    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:17.711756    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.711781    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.711785    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.711788    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.713341    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.211779    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.211794    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.211798    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.211800    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.213551    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.711749    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.711759    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.711764    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.711766    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.713325    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.713645    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:19.211738    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.211750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.211754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.211756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.213507    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:19.711717    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.711731    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.711734    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.711736    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.713476    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.211230    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.211271    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.211314    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.211318    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.212922    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.710773    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.710783    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.710787    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.710790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.712163    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.211705    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.211717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.211738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.211742    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.213362    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.213898    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:21.711683    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.711698    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.711702    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.711704    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.713411    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.211928    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.211938    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.211942    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.211944    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.214292    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.214473    4727 node_ready.go:49] node "ha-256000-m02" has status "Ready":"True"
	I0718 20:38:22.214479    4727 node_ready.go:38] duration metric: took 18.50492425s for node "ha-256000-m02" to be "Ready" ...
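The block of repeated GETs above is minikube polling the node object roughly twice a second until its Ready condition flips to True, which happens here at 20:38:22. The equivalent one-off check from a workstation with the cluster kubeconfig:

    kubectl get node ha-256000-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once kubelet and the CNI on m02 are up; "kubectl get nodes -w" watches the same transition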
	I0718 20:38:22.214483    4727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:22.214513    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:22.214523    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.214528    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.214533    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.216823    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.221656    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.221688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gl7wn
	I0718 20:38:22.221691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.221695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.221698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.223037    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.223438    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.223443    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.223447    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.223449    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.224627    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.224906    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.224912    4727 pod_ready.go:81] duration metric: took 3.247917ms for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224916    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224935    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t5fk7
	I0718 20:38:22.224937    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.224950    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.224954    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.226106    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.226400    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.226404    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.226411    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.226414    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.227526    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.227886    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.227891    4727 pod_ready.go:81] duration metric: took 2.972458ms for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227894    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227913    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000
	I0718 20:38:22.227919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.227923    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.227925    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.228991    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.229395    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.229399    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.229402    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.229406    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.230465    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.230693    4727 pod_ready.go:92] pod "etcd-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.230699    4727 pod_ready.go:81] duration metric: took 2.801916ms for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230703    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230720    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000-m02
	I0718 20:38:22.230723    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.230726    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.230728    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.231834    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.232263    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.232268    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.232271    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.232273    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.233360    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.233783    4727 pod_ready.go:92] pod "etcd-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.233789    4727 pod_ready.go:81] duration metric: took 3.083416ms for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.233794    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.413762    4727 request.go:629] Waited for 179.941666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413824    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413828    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.413841    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.413846    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.415462    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.613785    4727 request.go:629] Waited for 197.877917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613838    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613844    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.613847    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.613849    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.616581    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.616806    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.616814    4727 pod_ready.go:81] duration metric: took 383.02725ms for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.616819    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.813743    4727 request.go:629] Waited for 196.894708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813781    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813784    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.813788    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.813790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.815511    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.012375    4727 request.go:629] Waited for 196.496584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012418    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012422    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.012426    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.012428    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.014100    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.014297    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.014304    4727 pod_ready.go:81] duration metric: took 397.4915ms for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.014308    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.213728    4727 request.go:629] Waited for 199.392916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213764    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213767    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.213771    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.213774    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.215292    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.412016    4727 request.go:629] Waited for 196.230667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412048    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412050    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.412055    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.412057    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.414117    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.414317    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.414324    4727 pod_ready.go:81] duration metric: took 400.022917ms for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.414329    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.613726    4727 request.go:629] Waited for 199.367083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613754    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613757    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.613760    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.613763    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.615829    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.813718    4727 request.go:629] Waited for 197.566667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813747    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.813754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.813756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.815391    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.815670    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.815679    4727 pod_ready.go:81] duration metric: took 401.357791ms for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.815685    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.013744    4727 request.go:629] Waited for 198.028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013777    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013780    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.013783    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.013785    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.015358    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.213717    4727 request.go:629] Waited for 197.87625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213750    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213772    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.213776    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.213779    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.215177    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.215486    4727 pod_ready.go:92] pod "kube-proxy-99sn4" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.215494    4727 pod_ready.go:81] duration metric: took 399.816291ms for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.215499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.412543    4727 request.go:629] Waited for 197.022333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412572    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412576    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.412580    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.412582    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.414200    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.613688    4727 request.go:629] Waited for 199.188292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613723    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613734    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.613738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.613740    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.616115    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:24.616487    4727 pod_ready.go:92] pod "kube-proxy-jxnv9" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.616495    4727 pod_ready.go:81] duration metric: took 401.003958ms for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.616499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.811999    4727 request.go:629] Waited for 195.4745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812037    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812040    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.812044    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.812046    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.813599    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.013712    4727 request.go:629] Waited for 199.880375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013743    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013746    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.013750    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.013752    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.015408    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.015677    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.015685    4727 pod_ready.go:81] duration metric: took 399.1935ms for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.015689    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.213690    4727 request.go:629] Waited for 197.964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213729    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213735    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.213739    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.213741    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.215582    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.413674    4727 request.go:629] Waited for 197.841584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413700    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413702    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.413714    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.413717    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.415433    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.415627    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.415633    4727 pod_ready.go:81] duration metric: took 399.951542ms for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.415638    4727 pod_ready.go:38] duration metric: took 3.201238458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
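Rather than polling each system-critical pod individually as the log does, the same readiness gate can be expressed as a single wait. This is a sketch of an equivalent manual check, not the command minikube itself runs:

    kubectl -n kube-system get pods
    kubectl -n kube-system wait pod --all --for=condition=Ready --timeout=6m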
	I0718 20:38:25.415647    4727 api_server.go:52] waiting for apiserver process to appear ...
	I0718 20:38:25.415719    4727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:38:25.421413    4727 api_server.go:72] duration metric: took 21.843316333s to wait for apiserver process to appear ...
	I0718 20:38:25.421422    4727 api_server.go:88] waiting for apiserver healthz status ...
	I0718 20:38:25.421429    4727 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0718 20:38:25.424174    4727 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0718 20:38:25.424198    4727 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0718 20:38:25.424200    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.424204    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.424207    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.424682    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:38:25.424723    4727 api_server.go:141] control plane version: v1.30.3
	I0718 20:38:25.424729    4727 api_server.go:131] duration metric: took 3.305084ms to wait for apiserver health ...
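The health probe above hits the apiserver's /healthz endpoint directly over HTTPS and then reads /version to confirm the control-plane build. With the cluster kubeconfig, the same two checks are:

    kubectl get --raw /healthz     # prints "ok" when the apiserver is healthy
    kubectl version                # reports the v1.30.3 server version seen above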
	I0718 20:38:25.424732    4727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 20:38:25.613673    4727 request.go:629] Waited for 188.916583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613714    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.613721    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.613723    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.616608    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:25.620463    4727 system_pods.go:59] 17 kube-system pods found
	I0718 20:38:25.620472    4727 system_pods.go:61] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:25.620475    4727 system_pods.go:61] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:25.620477    4727 system_pods.go:61] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:25.620479    4727 system_pods.go:61] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:25.620480    4727 system_pods.go:61] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:25.620482    4727 system_pods.go:61] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:25.620484    4727 system_pods.go:61] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:25.620486    4727 system_pods.go:61] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:25.620488    4727 system_pods.go:61] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:25.620490    4727 system_pods.go:61] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:25.620492    4727 system_pods.go:61] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:25.620493    4727 system_pods.go:61] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:25.620495    4727 system_pods.go:61] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:25.620497    4727 system_pods.go:61] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:25.620498    4727 system_pods.go:61] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:25.620500    4727 system_pods.go:61] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:25.620502    4727 system_pods.go:61] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:25.620505    4727 system_pods.go:74] duration metric: took 195.775375ms to wait for pod list to return data ...
	I0718 20:38:25.620509    4727 default_sa.go:34] waiting for default service account to be created ...
	I0718 20:38:25.813683    4727 request.go:629] Waited for 193.137584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813709    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813712    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.813716    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.813721    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.815354    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.815466    4727 default_sa.go:45] found service account: "default"
	I0718 20:38:25.815474    4727 default_sa.go:55] duration metric: took 194.966875ms for default service account to be created ...
	I0718 20:38:25.815479    4727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 20:38:26.013652    4727 request.go:629] Waited for 198.147166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.013695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.013702    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.016448    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:26.020596    4727 system_pods.go:86] 17 kube-system pods found
	I0718 20:38:26.020604    4727 system_pods.go:89] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:26.020607    4727 system_pods.go:89] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:26.020609    4727 system_pods.go:89] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:26.020611    4727 system_pods.go:89] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:26.020613    4727 system_pods.go:89] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:26.020615    4727 system_pods.go:89] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:26.020617    4727 system_pods.go:89] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:26.020619    4727 system_pods.go:89] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:26.020621    4727 system_pods.go:89] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:26.020622    4727 system_pods.go:89] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:26.020624    4727 system_pods.go:89] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:26.020626    4727 system_pods.go:89] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:26.020628    4727 system_pods.go:89] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:26.020629    4727 system_pods.go:89] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:26.020631    4727 system_pods.go:89] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:26.020633    4727 system_pods.go:89] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:26.020635    4727 system_pods.go:89] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:26.020641    4727 system_pods.go:126] duration metric: took 205.165291ms to wait for k8s-apps to be running ...
	I0718 20:38:26.020645    4727 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 20:38:26.020720    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:38:26.027026    4727 system_svc.go:56] duration metric: took 6.37875ms WaitForService to wait for kubelet
	I0718 20:38:26.027036    4727 kubeadm.go:582] duration metric: took 22.448955791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
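The map in the kubeadm wait line above (apiserver, apps_running, default_sa, extra, kubelet, node_ready, system_pods) is the checklist minikube has just verified for this node. A minimal sketch of reproducing the two checks logged immediately above by hand, assuming kubectl is pointed at the ha-256000 profile and the systemctl call runs inside the guest (for example via minikube ssh):

    # the "k8s-apps running" check: every kube-system pod should be in the Running phase
    kubectl -n kube-system get pods --field-selector=status.phase=Running
    # the "kubelet" check minikube performs over SSH
    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"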
	I0718 20:38:26.027047    4727 node_conditions.go:102] verifying NodePressure condition ...
	I0718 20:38:26.213670    4727 request.go:629] Waited for 186.592667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213748    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213751    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.213756    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.213758    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.215369    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:26.215702    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215710    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215716    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215719    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215721    4727 node_conditions.go:105] duration metric: took 188.677125ms to run NodePressure ...
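The NodePressure step reads each node's CPU and ephemeral-storage capacity straight from the /api/v1/nodes response. A hedged kubectl equivalent; the output should match the 2-CPU / 17734596Ki figures logged above:

    # CPU capacity per node; the log above reports 2 for each control plane
    kubectl get nodes -o jsonpath='{.items[*].status.capacity.cpu}'; echo
    # ephemeral-storage capacity; bracket syntax because the key contains a dash
    kubectl get nodes -o jsonpath="{.items[*].status.capacity['ephemeral-storage']}"; echo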
	I0718 20:38:26.215733    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:38:26.215747    4727 start.go:255] writing updated cluster config ...
	I0718 20:38:26.221138    4727 out.go:177] 
	I0718 20:38:26.225195    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:26.225251    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.230070    4727 out.go:177] * Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	I0718 20:38:26.238085    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:38:26.238092    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:38:26.238177    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:38:26.238184    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:38:26.238226    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.238529    4727 start.go:360] acquireMachinesLock for ha-256000-m03: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:38:26.238563    4727 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "ha-256000-m03"
	I0718 20:38:26.238573    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:26.238613    4727 start.go:125] createHost starting for "m03" (driver="qemu2")
	I0718 20:38:26.243026    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:38:26.268172    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:38:26.268206    4727 client.go:168] LocalClient.Create starting
	I0718 20:38:26.268290    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:38:26.268328    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268338    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268376    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:38:26.268399    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268406    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268691    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:38:26.426584    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:38:26.572781    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:38:26.572789    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:38:26.573022    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.588299    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.588321    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.588408    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2 +20000M
	I0718 20:38:26.597072    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:38:26.597089    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.597102    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
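The m03 disk is produced by the two qemu-img invocations logged just above: a raw-to-qcow2 conversion of the base image followed by a 20000 MB grow. A standalone sketch of the same sequence with placeholder paths, plus an info call to confirm the resulting virtual size:

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2    # virtual size should reflect the extra 20000 MB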
	I0718 20:38:26.597113    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:38:26.597129    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:38:26.597163    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7f:0e:0c:6d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.641473    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.641500    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.641504    4727 main.go:141] libmachine: Attempt 0
	I0718 20:38:26.641520    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:26.641735    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:26.641749    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:26.641756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:26.641761    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:26.641765    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:26.641770    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:26.641776    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:28.643878    4727 main.go:141] libmachine: Attempt 1
	I0718 20:38:28.643913    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:28.644011    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:28.644023    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:28.644028    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:28.644032    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:28.644036    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:28.644046    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:28.644052    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:30.646081    4727 main.go:141] libmachine: Attempt 2
	I0718 20:38:30.646120    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:30.646235    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:30.646244    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:30.646250    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:30.646254    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:30.646258    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:30.646262    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:30.646267    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:32.648349    4727 main.go:141] libmachine: Attempt 3
	I0718 20:38:32.648374    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:32.648466    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:32.648477    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:32.648481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:32.648486    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:32.648497    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:32.648501    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:32.648514    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:34.650548    4727 main.go:141] libmachine: Attempt 4
	I0718 20:38:34.650566    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:34.650664    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:34.650674    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:34.650678    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:34.650682    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:34.650686    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:34.650692    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:34.650696    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:36.652758    4727 main.go:141] libmachine: Attempt 5
	I0718 20:38:36.652796    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:36.652971    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:36.652995    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:36.653008    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:36.653088    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:36.653108    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:36.653113    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:36.653119    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:38.654089    4727 main.go:141] libmachine: Attempt 6
	I0718 20:38:38.654205    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:38.654304    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:38.654315    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:38.654320    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:38.654329    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:38.654333    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:38.654338    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:38.654343    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:40.656398    4727 main.go:141] libmachine: Attempt 7
	I0718 20:38:40.656425    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:40.656535    4727 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0718 20:38:40.656552    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:d2:7f:e:c:6d:ba ID:1,d2:7f:e:c:6d:ba Lease:0x669b313f}
	I0718 20:38:40.656554    4727 main.go:141] libmachine: Found match: d2:7f:e:c:6d:ba
	I0718 20:38:40.656561    4727 main.go:141] libmachine: IP: 192.168.105.7
	I0718 20:38:40.656567    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
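libmachine resolves the new VM's address by polling the host DHCP lease database until the MAC it generated (d2:7f:e:c:6d:ba) appears, which happens here on attempt 7 with 192.168.105.7. A quick manual equivalent on the macOS host, assuming the same lease file used by the socket_vmnet/bootpd setup:

    # print the lease entry surrounding the VM's MAC address
    grep -B 3 -A 3 'd2:7f:e:c:6d:ba' /var/db/dhcpd_leases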
	I0718 20:38:49.679874    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:38:49.680098    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.680386    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.680393    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:38:49.720341    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:38:49.720352    4727 buildroot.go:166] provisioning hostname "ha-256000-m03"
	I0718 20:38:49.720396    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.720501    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.720507    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m03 && echo "ha-256000-m03" | sudo tee /etc/hostname
	I0718 20:38:49.765619    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m03
	
	I0718 20:38:49.765691    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.765821    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.765830    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:38:49.809445    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:38:49.809457    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:38:49.809463    4727 buildroot.go:174] setting up certificates
	I0718 20:38:49.809467    4727 provision.go:84] configureAuth start
	I0718 20:38:49.809471    4727 provision.go:143] copyHostCerts
	I0718 20:38:49.809497    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809560    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:38:49.809567    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809680    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:38:49.810515    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810551    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:38:49.810554    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810618    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:38:49.810856    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810884    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:38:49.810888    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810942    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:38:49.811128    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m03 san=[127.0.0.1 192.168.105.7 ha-256000-m03 localhost minikube]
	I0718 20:38:49.892392    4727 provision.go:177] copyRemoteCerts
	I0718 20:38:49.892426    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:38:49.892435    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:49.917004    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:38:49.917069    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:38:49.925760    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:38:49.925809    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:38:49.934495    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:38:49.934547    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:38:49.944465    4727 provision.go:87] duration metric: took 134.994083ms to configureAuth
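configureAuth generates a server certificate for the new node (SANs 127.0.0.1, 192.168.105.7, ha-256000-m03, localhost, minikube) and copies it, together with the CA, into /etc/docker on the guest. A hedged spot-check of the result from inside the VM:

    # the server cert should chain to the minikube CA and carry the expected SANs
    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A 1 'Subject Alternative Name'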
	I0718 20:38:49.944477    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:38:49.946418    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:49.946460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.946554    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.946559    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:38:49.988863    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:38:49.988874    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:38:49.988957    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:38:49.989005    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.989117    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.989151    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	Environment="NO_PROXY=192.168.105.5,192.168.105.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:38:50.033434    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	Environment=NO_PROXY=192.168.105.5,192.168.105.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:38:50.033494    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:50.033609    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:50.033618    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:38:51.357934    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:38:51.357948    4727 machine.go:97] duration metric: took 1.678110291s to provisionDockerMachine
	I0718 20:38:51.357955    4727 client.go:171] duration metric: took 25.090436s to LocalClient.Create
	I0718 20:38:51.357970    4727 start.go:167] duration metric: took 25.090492834s to libmachine.API.Create "ha-256000"
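The docker.service unit above is written to /lib/systemd/system/docker.service.new and only swapped into place when it differs from the existing unit; since no docker.service existed yet on the fresh guest, the diff fails and the mv / daemon-reload / enable / restart branch runs, producing the "Created symlink" message in the output. (The %!s(MISSING) tokens in the logged printf template appear to be Go fmt placeholders from minikube's own logging call, not part of the command that actually executed.) A short, hedged way to confirm what systemd ended up loading on the guest:

    sudo systemctl cat docker.service     # the unit file systemd actually loaded
    systemctl show docker -p ExecStart    # a single effective ExecStart= should remain
    systemctl is-enabled docker           # should report "enabled" once the symlink exists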
	I0718 20:38:51.357987    4727 start.go:293] postStartSetup for "ha-256000-m03" (driver="qemu2")
	I0718 20:38:51.357993    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:38:51.358064    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:38:51.358075    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.383362    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:38:51.385220    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:38:51.385229    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:38:51.385339    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:38:51.385460    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:38:51.385466    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:38:51.385589    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:38:51.389076    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:38:51.397667    4727 start.go:296] duration metric: took 39.676333ms for postStartSetup
	I0718 20:38:51.398148    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:51.398353    4727 start.go:128] duration metric: took 25.1604295s to createHost
	I0718 20:38:51.398381    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:51.398475    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:51.398479    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:38:51.443684    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360331.726119547
	
	I0718 20:38:51.443697    4727 fix.go:216] guest clock: 1721360331.726119547
	I0718 20:38:51.443701    4727 fix.go:229] Guest: 2024-07-18 20:38:51.726119547 -0700 PDT Remote: 2024-07-18 20:38:51.39836 -0700 PDT m=+164.266937085 (delta=327.759547ms)
	I0718 20:38:51.443713    4727 fix.go:200] guest clock delta is within tolerance: 327.759547ms
	I0718 20:38:51.443716    4727 start.go:83] releasing machines lock for "ha-256000-m03", held for 25.205843709s
	I0718 20:38:51.447883    4727 out.go:177] * Found network options:
	I0718 20:38:51.451892    4727 out.go:177]   - NO_PROXY=192.168.105.5,192.168.105.6
	W0718 20:38:51.455815    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.455829    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456208    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456223    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:38:51.456298    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:38:51.456327    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:38:51.479804    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:38:51.479862    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:38:51.524774    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:38:51.524786    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.524847    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.531855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:38:51.535855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:38:51.539545    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.539580    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:38:51.543520    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.547437    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:38:51.551284    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.555870    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:38:51.559926    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:38:51.563772    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:38:51.567972    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:38:51.572324    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:38:51.576791    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:38:51.580307    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.641726    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
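The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, and /etc/cni/net.d as its CNI config directory, after which containerd is restarted. A hedged spot-check of the resulting state on the guest:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false
    sudo systemctl is-active containerd                     # should be active after the restart
    docker info --format '{{.CgroupDriver}}'                # once dockerd is up, should also report cgroupfs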
	I0718 20:38:51.654538    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.654606    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:38:51.661500    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.671940    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:38:51.683005    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.689286    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.694846    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:38:51.739658    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.745604    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.752465    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:38:51.754039    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:38:51.757754    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:38:51.764400    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:38:51.833658    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:38:51.901993    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.902021    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:38:51.910153    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.983567    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:39:53.221259    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.239360917s)
	I0718 20:39:53.221338    4727 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 20:39:53.233907    4727 out.go:177] 
	W0718 20:39:53.237861    4727 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:38:50 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531478880Z" level=info msg="Starting up"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531868672Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.532448547Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.550167964Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560007672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560035005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560063505Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560074839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560111130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560123547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560217922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560230922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560237130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560241589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560270464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560366505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561097130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561114380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561185047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561197839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561245172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561280130Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563923422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563946005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563952880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563959547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563972505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564012380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564132589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564175464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564185714Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564191797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564197839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564204005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564210464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564216297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564222297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564228089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564233922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564239422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564256255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564264589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564270589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564276339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564281380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564287547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564292755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564298214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564303922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564310047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564315047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564320255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564325630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564332547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564341589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564346797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564352089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564402380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564416755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564421630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564427380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564432047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564437755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564467089Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564611964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564632964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564646839Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564655005Z" level=info msg="containerd successfully booted in 0.014823s"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.553636672Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.561497047Z" level=info msg="Loading containers: start."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.589775631Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.620757631Z" level=info msg="Loading containers: done."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624562881Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624599339Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:38:51 ha-256000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641454297Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641495839Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:38:52 ha-256000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.265389656Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266153693Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266192011Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266216137Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266284865Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:53 ha-256000-m03 dockerd[931]: time="2024-07-19T03:38:53.282812481Z" level=info msg="Starting up"
	Jul 19 03:39:53 ha-256000-m03 dockerd[931]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 20:39:53.237915    4727 out.go:239] * 
	W0718 20:39:53.239556    4727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:39:53.244752    4727 out.go:177] 
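	Note on the failure above: dockerd on ha-256000-m03 restarted and then gave up after 60 seconds because it could not reach containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so systemd marked docker.service as failed and minikube aborted the node start. A minimal sketch of how this could be confirmed by hand inside the affected guest, assuming SSH access to the node; the invocation and unit names below are standard minikube/systemd conventions, not taken from this report:

	    # hypothetical diagnostic session on the failing node
	    minikube ssh -p ha-256000 -n ha-256000-m03
	    # inside the guest: is containerd running, and does its socket exist?
	    sudo systemctl status containerd --no-pager
	    ls -l /run/containerd/containerd.sock
	    # why did containerd not come up before dockerd's 60s dial deadline?
	    sudo journalctl -u containerd -u docker --no-pager | tail -n 100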
	
	
	==> Docker <==
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.739942440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.739977945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.746431204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.746615908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.746648201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.746747834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.748938334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.748991113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.749008427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.749072346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62c92a2e03424d74abec35244521f1b7761982d7dbb7311513fb13f822c225ed/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f20cc01dd922b82b1ee5c6472024624755b1340ebceab21cf25c6eacf6e19c4/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5db9ae745b118ebe428663f3f1c8c679cdc1a26cea72ee6016f951ae34fc28ea/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858940540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858976718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858984229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.859018904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861914444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861992224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862003156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862051518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889214398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889287171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889293388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889346507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6dfd469e7d36e       ba04bb24b9575                                                                                       2 minutes ago       Running             storage-provisioner       0                   5db9ae745b118       storage-provisioner
	1097379f4f6cb       2437cf7621777                                                                                       2 minutes ago       Running             coredns                   0                   62c92a2e03424       coredns-7db6d8ff4d-gl7wn
	9a1c088f8966e       2437cf7621777                                                                                       2 minutes ago       Running             coredns                   0                   5f20cc01dd922       coredns-7db6d8ff4d-t5fk7
	74fc7ee221313       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493            2 minutes ago       Running             kindnet-cni               0                   f7fb0ae46c979       kindnet-znvgn
	9103cd3e30ac5       2351f570ed0ea                                                                                       2 minutes ago       Running             kube-proxy                0                   dd4c5c6f3ce08       kube-proxy-jxnv9
	8128016ed9c34       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   3 minutes ago       Running             kube-vip                  0                   e405a8655e904       kube-vip-ha-256000
	d5ff116ccff16       014faa467e297                                                                                       3 minutes ago       Running             etcd                      0                   1dd441769aa2a       etcd-ha-256000
	29f96bba40d3a       d48f992a22722                                                                                       3 minutes ago       Running             kube-scheduler            0                   aa59c4a58dba5       kube-scheduler-ha-256000
	70ffd55232c0b       8e97cdb19e7cc                                                                                       3 minutes ago       Running             kube-controller-manager   0                   96446dab38e98       kube-controller-manager-ha-256000
	dff4e67b66806       61773190d42ff                                                                                       3 minutes ago       Running             kube-apiserver            0                   877c87b7df476       kube-apiserver-ha-256000
	
	
	==> coredns [1097379f4f6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37765 - 42644 "HINFO IN 3312804127670044151.9315725327003923. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.009474143s
	
	
	==> coredns [9a1c088f8966] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42392 - 40278 "HINFO IN 2632545797447059373.9195703630793318012. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009665964s
	
	
	==> describe nodes <==
	Name:               ha-256000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:39:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:37:22 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:37:22 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:37:22 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:37:22 +0000   Fri, 19 Jul 2024 03:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-256000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d710ce1e1896426084c421362e18dda0
	  System UUID:                d710ce1e1896426084c421362e18dda0
	  Boot ID:                    83486cc1-e7b0-4568-bb5a-c46474de14e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gl7wn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m
	  kube-system                 coredns-7db6d8ff4d-t5fk7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m
	  kube-system                 etcd-ha-256000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m14s
	  kube-system                 kindnet-znvgn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-apiserver-ha-256000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-controller-manager-ha-256000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-proxy-jxnv9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-scheduler-ha-256000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-vip-ha-256000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m59s  kube-proxy       
	  Normal  Starting                 3m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m14s  kubelet          Node ha-256000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s  kubelet          Node ha-256000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s  kubelet          Node ha-256000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m     node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	  Normal  NodeReady                2m31s  kubelet          Node ha-256000 status is now: NodeReady
	  Normal  RegisteredNode           96s    node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	
	
	Name:               ha-256000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:39:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:38:32 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:38:32 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:38:32 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:38:32 +0000   Fri, 19 Jul 2024 03:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ha-256000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  System UUID:                b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  Boot ID:                    b548924b-9c86-4ba2-9a9e-2e5cc7830327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-256000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         108s
	  kube-system                 kindnet-2mvfm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      112s
	  kube-system                 kube-apiserver-ha-256000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-ha-256000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-99sn4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-ha-256000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-vip-ha-256000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node ha-256000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x7 over 112s)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           110s                 node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	  Normal  RegisteredNode           96s                  node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650707] EINJ: EINJ table not found.
	[  +0.549800] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.136927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000360] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +3.624626] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.080461] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.034842] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.469016] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.194273] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.081032] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.086446] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +2.293076] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.088824] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.085311] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.095642] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[  +2.542348] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.036994] kauditd_printk_skb: 257 callbacks suppressed
	[  +2.330914] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +2.194691] systemd-fstab-generator[1695]: Ignoring "noauto" option for root device
	[  +0.779104] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.727432] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +15.155229] kauditd_printk_skb: 62 callbacks suppressed
	[Jul19 03:37] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 03:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5ff116ccff1] <==
	{"level":"info","ts":"2024-07-19T03:38:02.220088Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.220096Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.839158Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.839254Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.849495Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.849589Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"58de0efec1d86300","to":"dcb4f5dcb4017fbf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T03:38:02.849603Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.851115Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"58de0efec1d86300","to":"dcb4f5dcb4017fbf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T03:38:02.851146Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:03.239361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856 15903606512413671359)"}
	{"level":"info","ts":"2024-07-19T03:38:03.239499Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300"}
	{"level":"info","ts":"2024-07-19T03:38:03.239512Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"58de0efec1d86300","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"dcb4f5dcb4017fbf"}
	{"level":"warn","ts":"2024-07-19T03:38:38.860449Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7133861002988229904,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-19T03:38:39.213772Z","caller":"traceutil/trace.go:171","msg":"trace[213955580] linearizableReadLoop","detail":"{readStateIndex:773; appliedIndex:773; }","duration":"854.090297ms","start":"2024-07-19T03:38:38.359661Z","end":"2024-07-19T03:38:39.213752Z","steps":["trace[213955580] 'read index received'  (duration: 854.085672ms)","trace[213955580] 'applied index is now lower than readState.Index'  (duration: 1.458µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:38:39.214653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"854.964275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.5\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-19T03:38:39.214668Z","caller":"traceutil/trace.go:171","msg":"trace[64905690] range","detail":"{range_begin:/registry/masterleases/192.168.105.5; range_end:; response_count:1; response_revision:726; }","duration":"855.016063ms","start":"2024-07-19T03:38:38.359648Z","end":"2024-07-19T03:38:39.214664Z","steps":["trace[64905690] 'agreement among raft nodes before linearized reading'  (duration: 854.846409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.214698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.359622Z","time spent":"855.063476ms","remote":"127.0.0.1:50924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.105.5\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.217551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.784693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.217629Z","caller":"traceutil/trace.go:171","msg":"trace[485073674] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:726; }","duration":"181.858104ms","start":"2024-07-19T03:38:39.035755Z","end":"2024-07-19T03:38:39.217613Z","steps":["trace[485073674] 'agreement among raft nodes before linearized reading'  (duration: 181.775735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.961025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-19T03:38:39.218206Z","caller":"traceutil/trace.go:171","msg":"trace[1437088211] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:726; }","duration":"362.976608ms","start":"2024-07-19T03:38:38.855164Z","end":"2024-07-19T03:38:39.218141Z","steps":["trace[1437088211] 'agreement among raft nodes before linearized reading'  (duration: 362.940194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.855138Z","time spent":"363.085141ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.219731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.350481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.21976Z","caller":"traceutil/trace.go:171","msg":"trace[1532987535] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:726; }","duration":"513.381938ms","start":"2024-07-19T03:38:38.706374Z","end":"2024-07-19T03:38:39.219756Z","steps":["trace[1532987535] 'agreement among raft nodes before linearized reading'  (duration: 509.325689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.219771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.706284Z","time spent":"513.484013ms","remote":"127.0.0.1:50868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 03:39:53 up 3 min,  0 users,  load average: 0.16, 0.23, 0.10
	Linux ha-256000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74fc7ee22131] <==
	I0719 03:38:49.214882       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:38:59.217347       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:38:59.217439       1 main.go:303] handling current node
	I0719 03:38:59.217457       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:38:59.217471       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:39:09.209363       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:39:09.209378       1 main.go:303] handling current node
	I0719 03:39:09.209386       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:39:09.209389       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:39:19.214175       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:39:19.214198       1 main.go:303] handling current node
	I0719 03:39:19.214208       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:39:19.214210       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:39:29.212679       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:39:29.212700       1 main.go:303] handling current node
	I0719 03:39:29.212709       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:39:29.212712       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:39:39.216563       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:39:39.216588       1 main.go:303] handling current node
	I0719 03:39:39.216597       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:39:39.216600       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:39:49.213248       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:39:49.213266       1 main.go:303] handling current node
	I0719 03:39:49.213274       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:39:49.213276       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [dff4e67b6680] <==
	I0719 03:36:37.251023       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 03:36:37.253149       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 03:36:37.253194       1 aggregator.go:165] initial CRD sync complete...
	I0719 03:36:37.253205       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 03:36:37.253211       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 03:36:37.253217       1 cache.go:39] Caches are synced for autoregister controller
	I0719 03:36:37.268298       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 03:36:38.152171       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 03:36:38.153736       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 03:36:38.153745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 03:36:38.302580       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 03:36:38.313862       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 03:36:38.355728       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 03:36:38.357891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0719 03:36:38.358258       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:36:38.359450       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 03:36:39.162576       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 03:36:39.259455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 03:36:39.263308       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 03:36:39.266876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 03:36:53.692820       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 03:36:53.723447       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 03:38:39.230077       1 trace.go:236] Trace[99535700]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.105.5,type:*v1.Endpoints,resource:apiServerIPInfo (19-Jul-2024 03:38:38.359) (total time: 870ms):
	Trace[99535700]: ---"initial value restored" 856ms (03:38:39.216)
	Trace[99535700]: [870.770259ms] [870.770259ms] END
	
	
	==> kube-controller-manager [70ffd55232c0] <==
	I0719 03:36:53.803537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.987083ms"
	I0719 03:36:53.806909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.242834ms"
	I0719 03:36:53.807043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.583µs"
	I0719 03:36:53.807127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.708µs"
	I0719 03:36:53.862883       1 shared_informer.go:320] Caches are synced for persistent volume
	I0719 03:36:53.963310       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 03:36:53.964393       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 03:36:53.966963       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 03:36:53.969983       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 03:36:54.380875       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:36:54.412561       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:36:54.412576       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 03:37:22.400084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.747µs"
	I0719 03:37:22.402636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.4µs"
	I0719 03:37:22.408319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.037µs"
	I0719 03:37:22.415741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.491µs"
	I0719 03:37:23.262808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.25µs"
	I0719 03:37:23.279353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.239521ms"
	I0719 03:37:23.279510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.085µs"
	I0719 03:37:23.294158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.586299ms"
	I0719 03:37:23.294186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.391µs"
	I0719 03:37:23.772649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 03:38:01.950412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m02\" does not exist"
	I0719 03:38:01.956739       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 03:38:03.779798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m02"
	
	
	==> kube-proxy [9103cd3e30ac] <==
	I0719 03:36:54.228395       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:36:54.235224       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0719 03:36:54.286000       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:36:54.286028       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:36:54.286039       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:36:54.287034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:36:54.287396       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:36:54.287403       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:36:54.288184       1 config.go:192] "Starting service config controller"
	I0719 03:36:54.288259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:36:54.288280       1 config.go:319] "Starting node config controller"
	I0719 03:36:54.288282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:36:54.289304       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:36:54.289308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:36:54.388688       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:36:54.388711       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:36:54.389972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29f96bba40d3] <==
	W0719 03:36:37.216352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:37.216366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 03:36:37.216373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 03:36:37.216385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:37.216419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 03:36:37.216424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 03:36:37.216440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:36:37.216444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 03:36:37.216461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 03:36:37.216464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 03:36:37.216476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:36:37.216491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:36:37.216504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.043369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:36:38.043491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:36:38.078796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.078841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.135286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.135302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.143595       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:36:38.143607       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 03:36:40.612937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.401650    2215 topology_manager.go:215] "Topology Admit Handler" podUID="06887cbc-e34e-460e-bc61-28fd45550399" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gl7wn"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.402445    2215 topology_manager.go:215] "Topology Admit Handler" podUID="3a11238c-96dd-4d66-8983-8cdcacaa8e46" podNamespace="kube-system" podName="storage-provisioner"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.463769    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a3f41b1-8454-4c68-aed4-7956c9f880eb-config-volume\") pod \"coredns-7db6d8ff4d-t5fk7\" (UID: \"3a3f41b1-8454-4c68-aed4-7956c9f880eb\") " pod="kube-system/coredns-7db6d8ff4d-t5fk7"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.463806    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06887cbc-e34e-460e-bc61-28fd45550399-config-volume\") pod \"coredns-7db6d8ff4d-gl7wn\" (UID: \"06887cbc-e34e-460e-bc61-28fd45550399\") " pod="kube-system/coredns-7db6d8ff4d-gl7wn"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.463816    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968nt\" (UniqueName: \"kubernetes.io/projected/3a3f41b1-8454-4c68-aed4-7956c9f880eb-kube-api-access-968nt\") pod \"coredns-7db6d8ff4d-t5fk7\" (UID: \"3a3f41b1-8454-4c68-aed4-7956c9f880eb\") " pod="kube-system/coredns-7db6d8ff4d-t5fk7"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.463826    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xpf\" (UniqueName: \"kubernetes.io/projected/06887cbc-e34e-460e-bc61-28fd45550399-kube-api-access-q6xpf\") pod \"coredns-7db6d8ff4d-gl7wn\" (UID: \"06887cbc-e34e-460e-bc61-28fd45550399\") " pod="kube-system/coredns-7db6d8ff4d-gl7wn"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.463834    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g4f9\" (UniqueName: \"kubernetes.io/projected/3a11238c-96dd-4d66-8983-8cdcacaa8e46-kube-api-access-7g4f9\") pod \"storage-provisioner\" (UID: \"3a11238c-96dd-4d66-8983-8cdcacaa8e46\") " pod="kube-system/storage-provisioner"
	Jul 19 03:37:22 ha-256000 kubelet[2215]: I0719 03:37:22.463844    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a11238c-96dd-4d66-8983-8cdcacaa8e46-tmp\") pod \"storage-provisioner\" (UID: \"3a11238c-96dd-4d66-8983-8cdcacaa8e46\") " pod="kube-system/storage-provisioner"
	Jul 19 03:37:23 ha-256000 kubelet[2215]: I0719 03:37:23.261847    2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gl7wn" podStartSLOduration=30.261832097 podStartE2EDuration="30.261832097s" podCreationTimestamp="2024-07-19 03:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 03:37:23.260385646 +0000 UTC m=+44.232521554" watchObservedRunningTime="2024-07-19 03:37:23.261832097 +0000 UTC m=+44.233968046"
	Jul 19 03:37:23 ha-256000 kubelet[2215]: I0719 03:37:23.287320    2215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=29.287306767 podStartE2EDuration="29.287306767s" podCreationTimestamp="2024-07-19 03:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 03:37:23.27698441 +0000 UTC m=+44.249120318" watchObservedRunningTime="2024-07-19 03:37:23.287306767 +0000 UTC m=+44.259442717"
	Jul 19 03:37:39 ha-256000 kubelet[2215]: E0719 03:37:39.079717    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:37:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:37:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:37:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:37:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:38:39 ha-256000 kubelet[2215]: E0719 03:38:39.085652    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:38:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:38:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:38:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:38:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:39:39 ha-256000 kubelet[2215]: E0719 03:39:39.080159    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:39:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:39:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:39:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:39:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-256000 -n ha-256000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-256000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (227.24s)
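
For reference, a minimal Go sketch (not minikube's helpers_test.go code) of the post-mortem check run just above: list every pod that is not in the Running phase, equivalent to the kubectl --field-selector query. The "ha-256000" context name comes from this log; the package layout and error handling are illustrative assumptions.

```go
// Sketch only: mirrors the helpers_test.go post-mortem kubectl query.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumes kubectl is on PATH and the ha-256000 context exists.
	out, err := exec.Command("kubectl", "--context", "ha-256000",
		"get", "po", "-A",
		"-o", "jsonpath={.items[*].metadata.name}",
		"--field-selector", "status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("pods not in Running phase: %q\n", out)
}
```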

TestMultiControlPlane/serial/DeployApp (703.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- rollout status deployment/busybox
E0718 20:40:13.021663    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.028000    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.040090    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.062150    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.104210    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.186270    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.348107    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:13.669208    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:14.311367    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:15.593447    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:18.155509    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:23.277510    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:33.519360    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:40:54.000929    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:41:34.961955    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:42:56.881873    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:43:59.667022    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:45:13.013362    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:45:40.719490    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
E0718 20:48:59.658674    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-256000 -- rollout status deployment/busybox: exit status 1 (10m3.323481167s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 out of 3 new replicas have been updated...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 6 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 2 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
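
A hedged sketch of the step that timed out above: waiting on `kubectl rollout status deployment/busybox` from Go, but bounding the wait with an explicit deadline instead of blocking for the full ten minutes. The 3-minute timeout is an illustrative value, not what ha_test.go actually uses.

```go
// Sketch only: rollout wait with a caller-imposed deadline.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "kubectl", "--context", "ha-256000",
		"rollout", "status", "deployment/busybox")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Either kubectl reported "exceeded its progress deadline" (as in the
		// log above) or our context deadline fired and the process was killed.
		fmt.Printf("rollout did not complete: %v\n", err)
	}
}
```
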
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0718 20:50:13.005116    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0718 20:50:22.724596    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
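
The repeated jsonpath queries above amount to a poll-until-three-IPs loop. A minimal sketch of that loop, assuming kubectl and the ha-256000 context; the attempt count and sleep interval are illustrative assumptions, not ha_test.go's real backoff.

```go
// Sketch only: poll the busybox pod IPs until all three replicas report one.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "ha-256000",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			fmt.Printf("attempt %d: kubectl failed: %v\n", attempt, err)
		} else {
			ips := strings.Fields(string(out))
			fmt.Printf("attempt %d: %d pod IPs: %v\n", attempt, len(ips), ips)
			if len(ips) == 3 {
				return // all three busybox replicas have been scheduled
			}
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("never saw 3 pod IPs; one replica is likely still unscheduled")
}
```
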
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-5922h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-bqdhb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- nslookup kubernetes.io: exit status 1 (80.35075ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-hkhd4 does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-hkhd4 could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-5922h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-bqdhb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- nslookup kubernetes.default: exit status 1 (78.252375ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-hkhd4 does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-hkhd4 could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-5922h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-bqdhb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (79.70875ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-hkhd4 does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-hkhd4 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
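
All three nslookup failures above trace back to the same unscheduled pod. A hedged sketch of that per-pod DNS check driven through `kubectl exec`; the pod names are copied from this log purely for illustration, and a pod with no node assigned (as busybox-fc5497c4f-hkhd4 here) fails with the same BadRequest error before nslookup ever runs.

```go
// Sketch only: run nslookup inside each busybox pod via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{
		"busybox-fc5497c4f-5922h",
		"busybox-fc5497c4f-bqdhb",
		"busybox-fc5497c4f-hkhd4",
	}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range hosts {
			out, err := exec.Command("kubectl", "--context", "ha-256000",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				fmt.Printf("%s could not resolve %s: %v\n%s\n", pod, host, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}
```
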
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-020000 image ls           | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	| delete  | -p functional-020000                 | functional-020000 | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT | 18 Jul 24 20:36 PDT |
	| start   | -p ha-256000 --wait=true             | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:36 PDT |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=qemu2                       |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- apply -f             | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:39 PDT | 18 Jul 24 20:39 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- rollout status       | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:39 PDT |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:49 PDT | 18 Jul 24 20:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:49 PDT | 18 Jul 24 20:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000         | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:36:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:07.154652    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154655    4727 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:07.154657    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154787    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:36:07.155777    4727 out.go:298] Setting JSON to false
	I0718 20:36:07.172062    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2135,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:36:07.172136    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:07.175769    4727 out.go:177] * [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:36:07.182867    4727 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:07.182897    4727 notify.go:220] Checking for updates...
	I0718 20:36:07.188814    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:07.191895    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:36:07.192950    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:07.195871    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:36:07.198897    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:07.202011    4727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:07.205826    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 20:36:07.212869    4727 start.go:297] selected driver: qemu2
	I0718 20:36:07.212875    4727 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:36:07.212880    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:07.215027    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:36:07.217921    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0718 20:36:07.220933    4727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:36:07.220960    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:07.220968    4727 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 20:36:07.220971    4727 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 20:36:07.220995    4727 start.go:340] cluster config:
	{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:07.224405    4727 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:36:07.231878    4727 out.go:177] * Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	I0718 20:36:07.235849    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:07.235880    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:36:07.235892    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:07.235960    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:07.235965    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:07.236167    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:07.236181    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json: {Name:mk4f96c33b167a65b92bd4e48e5f1a3c7a52bbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:07.236387    4727 start.go:360] acquireMachinesLock for ha-256000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:07.236422    4727 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-256000"
	I0718 20:36:07.236432    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:07.236461    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 20:36:07.243901    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:07.268930    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:07.268958    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:07.269026    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:07.269056    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269065    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269104    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:07.269127    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269136    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269466    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:07.395393    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:07.434010    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:07.434014    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:07.434195    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.445169    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.445186    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.445241    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2 +20000M
	I0718 20:36:07.453205    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:07.453220    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.453236    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.453239    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:07.453248    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:07.453278    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e3:ed:16:92:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.491921    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.491947    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.491951    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:07.491963    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:07.492029    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:07.492048    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:07.492054    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:07.492061    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:07.492067    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:09.494175    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:09.494254    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:09.494618    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:09.494729    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:09.494764    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:09.494789    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:09.494817    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:11.496994    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:11.497242    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:11.497663    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:11.497717    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:11.497756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:11.497787    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:11.497819    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:13.500006    4727 main.go:141] libmachine: Attempt 3
	I0718 20:36:13.500080    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:13.500185    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:13.500200    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:13.500205    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:13.500210    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:13.500216    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:15.502208    4727 main.go:141] libmachine: Attempt 4
	I0718 20:36:15.502220    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:15.502255    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:15.502275    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:15.502280    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:15.502285    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:15.502290    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:17.504286    4727 main.go:141] libmachine: Attempt 5
	I0718 20:36:17.504293    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:17.504346    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:17.504356    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:17.504360    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:17.504364    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:17.504369    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:19.506369    4727 main.go:141] libmachine: Attempt 6
	I0718 20:36:19.506395    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:19.506467    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:19.506476    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:19.506481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:19.506485    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:19.506490    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:21.508527    4727 main.go:141] libmachine: Attempt 7
	I0718 20:36:21.508554    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:21.508694    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:21.508708    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:21.508719    4727 main.go:141] libmachine: Found match: 6a:e3:ed:16:92:d5
	I0718 20:36:21.508730    4727 main.go:141] libmachine: IP: 192.168.105.5
	I0718 20:36:21.508735    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0718 20:36:22.527247    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:36:22.527480    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.527975    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.527990    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:36:22.610697    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:36:22.610726    4727 buildroot.go:166] provisioning hostname "ha-256000"
	I0718 20:36:22.610824    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.611097    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.611107    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000 && echo "ha-256000" | sudo tee /etc/hostname
	I0718 20:36:22.682492    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000
	
	I0718 20:36:22.682552    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.682702    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.682713    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:36:22.742479    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:36:22.742492    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:36:22.742500    4727 buildroot.go:174] setting up certificates
	I0718 20:36:22.742504    4727 provision.go:84] configureAuth start
	I0718 20:36:22.742508    4727 provision.go:143] copyHostCerts
	I0718 20:36:22.742542    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742586    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:36:22.742593    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742831    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:36:22.743010    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743030    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:36:22.743033    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743097    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:36:22.743184    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743212    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:36:22.743215    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743275    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:36:22.743373    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000 san=[127.0.0.1 192.168.105.5 ha-256000 localhost minikube]
	I0718 20:36:22.831924    4727 provision.go:177] copyRemoteCerts
	I0718 20:36:22.831953    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:36:22.831960    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:22.861471    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:36:22.861517    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:36:22.869576    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:36:22.869616    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0718 20:36:22.877642    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:36:22.877682    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 20:36:22.885597    4727 provision.go:87] duration metric: took 143.091583ms to configureAuth
	I0718 20:36:22.885605    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:36:22.885700    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:22.885731    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.885814    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.885819    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:36:22.939257    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:36:22.939268    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:36:22.939327    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:36:22.939382    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.939495    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.939529    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:36:22.999120    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:36:22.999176    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.999299    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.999307    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:36:24.399001    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
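The SSH command above installs the generated unit only when it differs from what is already on disk: `diff -u old new` succeeds (and nothing happens) when the files match; otherwise the new file is moved into place and docker is reloaded, enabled and restarted. A minimal Go sketch of that idempotent compare-and-swap, with illustrative paths and without the sudo/systemctl plumbing of the real run:

    // Write the candidate unit next to the real one and swap it in only when the
    // contents actually changed; a no-op otherwise, so repeated provisioning runs
    // leave an already-correct machine untouched. Paths are illustrative.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func installIfChanged(path string, unit []byte) (bool, error) {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, unit) {
    		return false, nil // identical: keep the existing unit, no restart needed
    	}
    	if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
    		return false, err
    	}
    	// Rename is atomic on the same filesystem, so readers never see a partial unit.
    	return true, os.Rename(path+".new", path)
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	changed, err := installIfChanged("/tmp/docker.service", unit)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("unit changed:", changed) // true => daemon-reload + restart docker
    }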
	
	I0718 20:36:24.399014    4727 machine.go:97] duration metric: took 1.871786709s to provisionDockerMachine
	I0718 20:36:24.399020    4727 client.go:171] duration metric: took 17.130530167s to LocalClient.Create
	I0718 20:36:24.399035    4727 start.go:167] duration metric: took 17.130580916s to libmachine.API.Create "ha-256000"
	I0718 20:36:24.399041    4727 start.go:293] postStartSetup for "ha-256000" (driver="qemu2")
	I0718 20:36:24.399047    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:36:24.399133    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:36:24.399144    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.429882    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:36:24.431446    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:36:24.431458    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:36:24.431559    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:36:24.431674    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:36:24.431679    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:36:24.431800    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:36:24.434949    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:24.443099    4727 start.go:296] duration metric: took 44.054208ms for postStartSetup
	I0718 20:36:24.443547    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:24.443727    4727 start.go:128] duration metric: took 17.207737166s to createHost
	I0718 20:36:24.443753    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:24.443841    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:24.443845    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:36:24.496185    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360184.183489336
	
	I0718 20:36:24.496191    4727 fix.go:216] guest clock: 1721360184.183489336
	I0718 20:36:24.496195    4727 fix.go:229] Guest: 2024-07-18 20:36:24.183489336 -0700 PDT Remote: 2024-07-18 20:36:24.44373 -0700 PDT m=+17.308254043 (delta=-260.240664ms)
	I0718 20:36:24.496206    4727 fix.go:200] guest clock delta is within tolerance: -260.240664ms
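The two fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew stays inside a tolerance (about -260 ms in this run). A small Go sketch of that comparison using the timestamps from this log; the one-second tolerance is an assumption for illustration, not minikube's configured value:

    // Parse "seconds.nanoseconds" as printed by `date +%s.%N` and compare two clocks.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseUnixFrac(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec := int64(0)
    	if len(parts) == 2 {
    		// Pad/truncate the fraction to exactly nine digits (nanoseconds).
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseUnixFrac("1721360184.183489336") // guest `date +%s.%N`
    	host, _ := parseUnixFrac("1721360184.443730000")  // host clock at roughly the same moment
    	delta := guest.Sub(host)
    	tolerance := time.Second // assumed tolerance for this sketch
    	fmt.Printf("guest clock delta: %v, within tolerance: %v\n",
    		delta, delta > -tolerance && delta < tolerance)
    }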
	I0718 20:36:24.496210    4727 start.go:83] releasing machines lock for "ha-256000", held for 17.260259709s
	I0718 20:36:24.496487    4727 ssh_runner.go:195] Run: cat /version.json
	I0718 20:36:24.496496    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.498161    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:36:24.498180    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.526501    4727 ssh_runner.go:195] Run: systemctl --version
	I0718 20:36:24.575612    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 20:36:24.577665    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:36:24.577696    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:36:24.584047    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:36:24.584056    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.584135    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.590860    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:36:24.594365    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:36:24.597804    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.597834    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:36:24.601501    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.605402    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:36:24.609279    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.613150    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:36:24.616783    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:36:24.620826    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:36:24.624868    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:36:24.628746    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:36:24.632406    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:36:24.635998    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:24.719937    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:36:24.727107    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.727172    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:36:24.734556    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.745145    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:36:24.752682    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.758405    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.763722    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:36:24.804424    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.810784    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.817505    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:36:24.818968    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:36:24.822004    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:36:24.827814    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:36:24.912234    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:36:24.993893    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.993951    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:36:25.000295    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:25.079893    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:27.267877    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188026583s)
	I0718 20:36:27.267954    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:36:27.273388    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:36:27.280952    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.286424    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:36:27.376871    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:36:27.462186    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.546490    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:36:27.553023    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.558470    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.643444    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:36:27.668876    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:36:27.669018    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:36:27.671231    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:36:27.671271    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:36:27.672746    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:36:27.689183    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:36:27.689243    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.699313    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.710299    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:36:27.710436    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:36:27.711936    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:27.716497    4727 kubeadm.go:883] updating cluster {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 20:36:27.716547    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:27.716590    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:27.721193    4727 docker.go:685] Got preloaded images: 
	I0718 20:36:27.721201    4727 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0718 20:36:27.721249    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:27.725068    4727 ssh_runner.go:195] Run: which lz4
	I0718 20:36:27.726303    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0718 20:36:27.726385    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0718 20:36:27.727841    4727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 20:36:27.727857    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335411903 bytes)
	I0718 20:36:29.032881    4727 docker.go:649] duration metric: took 1.306555792s to copy over tarball
	I0718 20:36:29.032945    4727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 20:36:30.077797    4727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.044866416s)
	I0718 20:36:30.077812    4727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 20:36:30.092929    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:30.096929    4727 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0718 20:36:30.102897    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:30.190133    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:32.408215    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218126791s)
	I0718 20:36:32.408325    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:32.414564    4727 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 20:36:32.414576    4727 cache_images.go:84] Images are preloaded, skipping loading
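Before and after copying the preloaded tarball, the runner lists `docker images` and decides whether the expected control-plane images are present ("kube-apiserver:v1.30.3 wasn't preloaded" on the first pass, "Images are preloaded, skipping loading" after extraction). A rough Go sketch of that presence check; the required-image list here is abbreviated, not the full set minikube verifies:

    // Given `docker images --format {{.Repository}}:{{.Tag}}` output, report which
    // required images are still missing; an empty result means the preload tarball
    // does not need to be loaded again.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func missingImages(dockerImages string, required []string) []string {
    	present := map[string]bool{}
    	for _, line := range strings.Split(dockerImages, "\n") {
    		if img := strings.TrimSpace(line); img != "" {
    			present[img] = true
    		}
    	}
    	var missing []string
    	for _, img := range required {
    		if !present[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing
    }

    func main() {
    	out := "registry.k8s.io/kube-apiserver:v1.30.3\nregistry.k8s.io/pause:3.9\n"
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.30.3",
    		"registry.k8s.io/etcd:3.5.12-0", // missing from `out`, so a preload is needed
    	}
    	fmt.Println("missing:", missingImages(out, required))
    }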
	I0718 20:36:32.414588    4727 kubeadm.go:934] updating node { 192.168.105.5 8443 v1.30.3 docker true true} ...
	I0718 20:36:32.414662    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:36:32.414717    4727 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 20:36:32.422967    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:32.422975    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:32.422989    4727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 20:36:32.423001    4727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256000 NodeName:ha-256000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 20:36:32.423064    4727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-256000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 20:36:32.423074    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:36:32.423127    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:36:32.430238    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:36:32.430293    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0718 20:36:32.430329    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:36:32.433734    4727 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 20:36:32.433764    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0718 20:36:32.437628    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0718 20:36:32.443760    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:36:32.449483    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0718 20:36:32.455815    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0718 20:36:32.461759    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:36:32.463168    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
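Both /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same pattern: filter out any existing line that ends in the name, append a fresh "IP<TAB>name" mapping, and copy the result back into place. A small Go sketch of that rewrite; it operates on a string rather than on /etc/hosts itself:

    // Drop any stale mapping for `name` (mirroring `grep -v $'\t<name>$'`) and
    // append the desired one, keeping every unrelated line untouched.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func ensureHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // old mapping for this name: replaced below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.105.1\thost.minikube.internal\n"
    	fmt.Print(ensureHostsEntry(hosts, "192.168.105.254", "control-plane.minikube.internal"))
    }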
	I0718 20:36:32.467182    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:32.556522    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:36:32.567007    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.5
	I0718 20:36:32.567019    4727 certs.go:194] generating shared ca certs ...
	I0718 20:36:32.567029    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.567195    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:36:32.567242    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:36:32.567249    4727 certs.go:256] generating profile certs ...
	I0718 20:36:32.567287    4727 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:36:32.567299    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt with IP's: []
	I0718 20:36:32.629331    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt ...
	I0718 20:36:32.629341    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt: {Name:mkc9c3e562115edef8b85e012e81a3eb4a2cf75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key ...
	I0718 20:36:32.629649    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key: {Name:mkb41caa35d055a2dcb04d364862addacfff33bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629781    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4
	I0718 20:36:32.629789    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.254]
	I0718 20:36:32.695617    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 ...
	I0718 20:36:32.695626    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4: {Name:mkee89910ca1db08ac083863b0e4a027ae270203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696056    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 ...
	I0718 20:36:32.696061    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4: {Name:mk8365902b4e9f071c9404629a4b35cc6ca6ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696198    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:36:32.696306    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:36:32.696557    4727 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:36:32.696565    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt with IP's: []
	I0718 20:36:32.762976    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt ...
	I0718 20:36:32.762980    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt: {Name:mkb3e0281e7ef362624ad24bb17cfb244b9bc171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763112    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key ...
	I0718 20:36:32.763115    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key: {Name:mkc06a04ddb3616913d2c6f5647bad25fef6f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763224    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:36:32.763237    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:36:32.763247    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:36:32.763257    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:36:32.763268    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:36:32.763279    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:36:32.763290    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:36:32.763301    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:36:32.763382    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:36:32.763410    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:36:32.763415    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:36:32.763434    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:36:32.763451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:36:32.763468    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:36:32.763505    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:32.763524    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.763535    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.763546    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.763807    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:36:32.773281    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:36:32.781447    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:36:32.789770    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:36:32.798040    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 20:36:32.806232    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:36:32.814458    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:36:32.822522    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:36:32.830515    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:36:32.838566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:36:32.846581    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:36:32.854568    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 20:36:32.860769    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:36:32.863035    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:36:32.867352    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868859    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868879    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.870984    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:36:32.874504    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:36:32.878096    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879659    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879678    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.881640    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:36:32.885559    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:36:32.889461    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891114    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891133    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.893171    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
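The `openssl x509 -hash -noout -in ...` / `ln -fs ...` pairs above are how each CA certificate gets its hash-named alias (for example 51391683.0) under /etc/ssl/certs, which is the directory layout OpenSSL-based clients scan at verification time. A Go sketch of the same wiring that shells out to openssl for the subject hash; paths are illustrative and the real run performs these steps over SSH inside the guest:

    // Ask openssl for the certificate's subject hash and point <certsDir>/<hash>.0
    // at the certificate, replacing any stale link first (like `ln -fs`).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkByHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ignore "not found"; we only care that the new link wins
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("created", link)
    }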
	I0718 20:36:32.897112    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:36:32.898621    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:36:32.898660    4727 kubeadm.go:392] StartCluster: {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clus
terName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:32.898726    4727 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 20:36:32.903849    4727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 20:36:32.907545    4727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 20:36:32.910740    4727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 20:36:32.914021    4727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 20:36:32.914030    4727 kubeadm.go:157] found existing configuration files:
	
	I0718 20:36:32.914050    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 20:36:32.917254    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 20:36:32.917277    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 20:36:32.920874    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 20:36:32.924549    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 20:36:32.924574    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 20:36:32.928189    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.931542    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 20:36:32.931572    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.934804    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 20:36:32.937825    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 20:36:32.937847    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 20:36:32.941208    4727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 20:36:32.964473    4727 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0718 20:36:32.964502    4727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 20:36:33.010272    4727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 20:36:33.010346    4727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 20:36:33.010394    4727 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 20:36:33.080896    4727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 20:36:33.088116    4727 out.go:204]   - Generating certificates and keys ...
	I0718 20:36:33.088149    4727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 20:36:33.088180    4727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 20:36:33.187618    4727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 20:36:33.225765    4727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0718 20:36:33.439485    4727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0718 20:36:33.599214    4727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0718 20:36:33.681357    4727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0718 20:36:33.681418    4727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.726840    4727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0718 20:36:33.726901    4727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.875169    4727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 20:36:34.071575    4727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 20:36:34.163748    4727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0718 20:36:34.163778    4727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 20:36:34.260583    4727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 20:36:34.352375    4727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 20:36:34.395125    4727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 20:36:34.512349    4727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 20:36:34.655223    4727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 20:36:34.655381    4727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 20:36:34.656483    4727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 20:36:34.666848    4727 out.go:204]   - Booting up control plane ...
	I0718 20:36:34.666901    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 20:36:34.666950    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 20:36:34.666982    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 20:36:34.667031    4727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 20:36:34.667081    4727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 20:36:34.667103    4727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 20:36:34.759306    4727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0718 20:36:34.759350    4727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0718 20:36:35.263383    4727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.7975ms
	I0718 20:36:35.263624    4727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0718 20:36:38.766721    4727 kubeadm.go:310] [api-check] The API server is healthy after 3.504642043s
	I0718 20:36:38.772139    4727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 20:36:38.775784    4727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 20:36:38.782114    4727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 20:36:38.782191    4727 kubeadm.go:310] [mark-control-plane] Marking the node ha-256000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 20:36:38.784595    4727 kubeadm.go:310] [bootstrap-token] Using token: yv8fsh.sh51yi31jewcw15j
	I0718 20:36:38.788784    4727 out.go:204]   - Configuring RBAC rules ...
	I0718 20:36:38.788835    4727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 20:36:38.790051    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 20:36:38.796261    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 20:36:38.797188    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 20:36:38.797986    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 20:36:38.798957    4727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 20:36:39.169725    4727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 20:36:39.576005    4727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 20:36:40.169284    4727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 20:36:40.169608    4727 kubeadm.go:310] 
	I0718 20:36:40.169641    4727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 20:36:40.169646    4727 kubeadm.go:310] 
	I0718 20:36:40.169692    4727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 20:36:40.169695    4727 kubeadm.go:310] 
	I0718 20:36:40.169709    4727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 20:36:40.169760    4727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 20:36:40.169794    4727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 20:36:40.169797    4727 kubeadm.go:310] 
	I0718 20:36:40.169826    4727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 20:36:40.169830    4727 kubeadm.go:310] 
	I0718 20:36:40.169856    4727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 20:36:40.169858    4727 kubeadm.go:310] 
	I0718 20:36:40.169883    4727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 20:36:40.169938    4727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 20:36:40.169984    4727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 20:36:40.169987    4727 kubeadm.go:310] 
	I0718 20:36:40.170044    4727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 20:36:40.170090    4727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 20:36:40.170093    4727 kubeadm.go:310] 
	I0718 20:36:40.170134    4727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170222    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 20:36:40.170234    4727 kubeadm.go:310] 	--control-plane 
	I0718 20:36:40.170242    4727 kubeadm.go:310] 
	I0718 20:36:40.170285    4727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 20:36:40.170299    4727 kubeadm.go:310] 
	I0718 20:36:40.170351    4727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170426    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 20:36:40.170492    4727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 20:36:40.170502    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:40.170507    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:40.176555    4727 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 20:36:40.183616    4727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 20:36:40.185686    4727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 20:36:40.185696    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 20:36:40.191764    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 20:36:40.332259    4727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 20:36:40.332307    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.332337    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000 minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=true
	I0718 20:36:40.385331    4727 ops.go:34] apiserver oom_adj: -16
	I0718 20:36:40.385383    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.887435    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.387480    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.887395    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.387370    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.885756    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.387374    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.886101    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.386656    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.887355    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.387330    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.887331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.386668    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.886398    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.385335    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.887237    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.387224    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.887271    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.387175    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.885647    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.387168    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.887214    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.387158    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.887129    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.387127    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.887088    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.387119    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.885301    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.387061    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.453749    4727 kubeadm.go:1113] duration metric: took 14.12187225s to wait for elevateKubeSystemPrivileges
	I0718 20:36:54.453766    4727 kubeadm.go:394] duration metric: took 21.55570275s to StartCluster
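The repeated "kubectl get sa default" calls above are a poll loop: after creating the minikube-rbac cluster role binding, elevateKubeSystemPrivileges waits (at roughly 0.5s intervals) until the default service account exists. Roughly equivalent shell, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done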
	I0718 20:36:54.453776    4727 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.453868    4727 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.454239    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.454483    4727 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.454492    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:36:54.454494    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 20:36:54.454496    4727 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 20:36:54.454530    4727 addons.go:69] Setting storage-provisioner=true in profile "ha-256000"
	I0718 20:36:54.454533    4727 addons.go:69] Setting default-storageclass=true in profile "ha-256000"
	I0718 20:36:54.454543    4727 addons.go:234] Setting addon storage-provisioner=true in "ha-256000"
	I0718 20:36:54.454546    4727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-256000"
	I0718 20:36:54.454554    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.454722    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.455342    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.455486    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 20:36:54.455762    4727 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 20:36:54.455811    4727 addons.go:234] Setting addon default-storageclass=true in "ha-256000"
	I0718 20:36:54.455823    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.460675    4727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 20:36:54.464747    4727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.464758    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 20:36:54.464769    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.465436    4727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.465440    4727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 20:36:54.465444    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.511774    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.706626    4727 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
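The sed pipeline at 20:36:54.511774 rewrites the coredns ConfigMap in place; unescaping that expression, the stanza injected just above the "forward . /etc/resolv.conf" line looks like:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}

plus a "log" directive inserted before "errors". The result can be inspected with the same kubectl get configmap coredns -o yaml command shown above.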
	I0718 20:36:54.777305    4727 round_trippers.go:463] GET https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 20:36:54.777314    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.777318    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.777321    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.782732    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:36:54.783013    4727 round_trippers.go:463] PUT https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0718 20:36:54.783019    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.783023    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.783026    4727 round_trippers.go:473]     Content-Type: application/json
	I0718 20:36:54.783028    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.784014    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:36:54.792272    4727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 20:36:54.793579    4727 addons.go:510] duration metric: took 339.092083ms for enable addons: enabled=[storage-provisioner default-storageclass]
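Both addons were applied with plain kubectl apply -f against manifests staged under /etc/kubernetes/addons. Their state can be checked later with, for example (a sketch, not part of this log):

	minikube -p ha-256000 addons list
	kubectl get storageclass
	kubectl -n kube-system get pod storage-provisioner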
	I0718 20:36:54.793593    4727 start.go:246] waiting for cluster config update ...
	I0718 20:36:54.793600    4727 start.go:255] writing updated cluster config ...
	I0718 20:36:54.798143    4727 out.go:177] 
	I0718 20:36:54.802340    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.802369    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.805206    4727 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:36:54.813295    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:54.813304    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:54.813383    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:54.813389    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:54.813425    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.813828    4727 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:54.813863    4727 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:36:54.813872    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:tr
ue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.813899    4727 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0718 20:36:54.818236    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:54.833731    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:54.833754    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:54.833854    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:54.833891    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833898    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.833936    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:54.833959    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833965    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.834273    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:54.991167    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:55.074302    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:55.074313    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:55.074505    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.084177    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.084198    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.084247    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2 +20000M
	I0718 20:36:55.092640    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:55.092655    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.092668    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
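The two qemu-img invocations above create the node disk: convert turns the raw seed image into qcow2, then resize ... +20000M grows it to the requested 20000 MB. The resulting image can be inspected with (sketch):

	qemu-img info /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2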
	I0718 20:36:55.092672    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:55.092685    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:55.092723    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.131373    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.131397    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.131401    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:55.131414    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:55.131476    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:55.131491    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:55.131496    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:55.131509    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:55.131515    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:55.131521    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:57.132241    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:57.132260    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:57.132370    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:57.132380    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:57.132387    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:57.132391    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:57.132399    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:57.132403    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:59.134429    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:59.134514    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:59.134610    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:59.134633    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:59.134640    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:59.134645    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:59.134650    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:59.134655    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:01.136704    4727 main.go:141] libmachine: Attempt 3
	I0718 20:37:01.136730    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:01.136864    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:01.136874    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:01.136879    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:01.136892    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:01.136897    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:01.136902    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:03.139087    4727 main.go:141] libmachine: Attempt 4
	I0718 20:37:03.139131    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:03.139262    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:03.139278    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:03.139286    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:03.139290    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:03.139295    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:03.139305    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:05.141342    4727 main.go:141] libmachine: Attempt 5
	I0718 20:37:05.141371    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:05.141487    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:05.141499    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:05.141504    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:05.141508    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:05.141513    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:05.141518    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:07.141729    4727 main.go:141] libmachine: Attempt 6
	I0718 20:37:07.141760    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:07.141844    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:07.141853    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:07.141858    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:07.141862    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:07.141866    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:07.141871    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:09.143893    4727 main.go:141] libmachine: Attempt 7
	I0718 20:37:09.143910    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:09.143997    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:37:09.144009    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:37:09.144011    4727 main.go:141] libmachine: Found match: 5a:e8:7:38:73:30
	I0718 20:37:09.144020    4727 main.go:141] libmachine: IP: 192.168.105.6
	I0718 20:37:09.144023    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
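IP discovery for the new VM works by polling the host's DHCP lease file for the MAC assigned to the guest NIC; note the search string drops the leading zero (5a:e8:07:38:73:30 is looked up as 5a:e8:7:38:73:30) to match how dhcpd_leases records addresses. The same lookup can be done by hand (a sketch; the exact field order in the lease file may vary):

	grep -i -B 3 -A 3 '5a:e8:7:38:73:30' /var/db/dhcpd_leases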
	I0718 20:37:22.173394    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:37:22.173460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.173824    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.173832    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:37:22.224366    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:37:22.224379    4727 buildroot.go:166] provisioning hostname "ha-256000-m02"
	I0718 20:37:22.224437    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.224569    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.224574    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m02 && echo "ha-256000-m02" | sudo tee /etc/hostname
	I0718 20:37:22.281136    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m02
	
	I0718 20:37:22.281193    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.281326    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.281333    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:37:22.335405    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:37:22.335420    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:37:22.335427    4727 buildroot.go:174] setting up certificates
	I0718 20:37:22.335432    4727 provision.go:84] configureAuth start
	I0718 20:37:22.335436    4727 provision.go:143] copyHostCerts
	I0718 20:37:22.335460    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335499    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:37:22.335504    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335625    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:37:22.335755    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335793    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:37:22.335798    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335849    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:37:22.335937    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.335958    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:37:22.335961    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.336009    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:37:22.336098    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m02 san=[127.0.0.1 192.168.105.6 ha-256000-m02 localhost minikube]
	I0718 20:37:22.416839    4727 provision.go:177] copyRemoteCerts
	I0718 20:37:22.417292    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:37:22.417307    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:22.446250    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:37:22.446323    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:37:22.455193    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:37:22.455243    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:37:22.463182    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:37:22.463217    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:37:22.471841    4727 provision.go:87] duration metric: took 136.406375ms to configureAuth
	I0718 20:37:22.471860    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:37:22.472154    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:22.472192    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.472306    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.472312    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:37:22.520570    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:37:22.520580    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:37:22.520661    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:37:22.520720    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.520835    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.520884    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:37:22.573905    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:37:22.573954    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.574074    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.574082    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:37:23.946918    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:37:23.946932    4727 machine.go:97] duration metric: took 1.773574458s to provisionDockerMachine
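The provisioning step above renders a complete docker.service unit via printf | sudo tee into docker.service.new, then installs it only if it differs from the current unit (the diff || { mv; daemon-reload; enable; restart; } one-liner at 20:37:22.574082). Since no unit file existed yet, diff failed and the new file was moved into place and enabled, producing the symlink shown in the output. The active unit and daemon state can be confirmed with (sketch):

	sudo systemctl cat docker.service
	sudo systemctl is-active docker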
	I0718 20:37:23.946948    4727 client.go:171] duration metric: took 29.113993584s to LocalClient.Create
	I0718 20:37:23.946964    4727 start.go:167] duration metric: took 29.114041166s to libmachine.API.Create "ha-256000"
	I0718 20:37:23.946968    4727 start.go:293] postStartSetup for "ha-256000-m02" (driver="qemu2")
	I0718 20:37:23.946975    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:37:23.947049    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:37:23.947059    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:23.975789    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:37:23.977316    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:37:23.977325    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:37:23.977414    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:37:23.977531    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:37:23.977538    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:37:23.977667    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:37:23.981129    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:23.989836    4727 start.go:296] duration metric: took 42.86225ms for postStartSetup
	I0718 20:37:23.990279    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:37:23.990466    4727 start.go:128] duration metric: took 29.177367125s to createHost
	I0718 20:37:23.990492    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:23.990582    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:23.990587    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:37:24.039991    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360244.056265969
	
	I0718 20:37:24.040003    4727 fix.go:216] guest clock: 1721360244.056265969
	I0718 20:37:24.040011    4727 fix.go:229] Guest: 2024-07-18 20:37:24.056265969 -0700 PDT Remote: 2024-07-18 20:37:23.990469 -0700 PDT m=+76.856635126 (delta=65.796969ms)
	I0718 20:37:24.040021    4727 fix.go:200] guest clock delta is within tolerance: 65.796969ms
	I0718 20:37:24.040027    4727 start.go:83] releasing machines lock for "ha-256000-m02", held for 29.226966s
	I0718 20:37:24.045188    4727 out.go:177] * Found network options:
	I0718 20:37:24.048256    4727 out.go:177]   - NO_PROXY=192.168.105.5
	W0718 20:37:24.052331    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:37:24.052639    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:37:24.052695    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:37:24.052702    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:24.052696    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:37:24.052803    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	W0718 20:37:24.080701    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:37:24.080760    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:37:24.120864    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:37:24.120877    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.120944    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.128913    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:37:24.133095    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:37:24.137320    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.137368    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:37:24.141513    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.145685    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:37:24.149674    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.153524    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:37:24.157504    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:37:24.161442    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:37:24.165217    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:37:24.169715    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:37:24.173504    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:37:24.177428    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.249585    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:37:24.258814    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.258889    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:37:24.266134    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.272789    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:37:24.282701    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.287831    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.293394    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:37:24.332150    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.338444    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.344970    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:37:24.346508    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:37:24.349662    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:37:24.355683    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:37:24.439008    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:37:24.522884    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.522913    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:37:24.529269    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.614408    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:37:26.705797    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.091426708s)
	I0718 20:37:26.705868    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:37:26.711797    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:37:26.719055    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.724747    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:37:26.813533    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:37:26.893596    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:26.965581    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:37:26.972962    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.978785    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:27.061213    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:37:27.087585    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:37:27.087659    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:37:27.091046    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:37:27.091097    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:37:27.092542    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:37:27.112215    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:37:27.112278    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.124950    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
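With containerd and crio stopped, crictl is pointed at the cri-dockerd socket via /etc/crictl.yaml (runtime-endpoint: unix:///var/run/cri-dockerd.sock), and the reported runtime is Docker 27.0.3. The same checks can be repeated manually on the node (sketch):

	sudo crictl version
	sudo crictl info | head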
	I0718 20:37:27.136592    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:37:27.145555    4727 out.go:177]   - env NO_PROXY=192.168.105.5
	I0718 20:37:27.149713    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:37:27.151201    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:27.155414    4727 mustload.go:65] Loading cluster: ha-256000
	I0718 20:37:27.155551    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:27.156066    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:27.156157    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.6
	I0718 20:37:27.156161    4727 certs.go:194] generating shared ca certs ...
	I0718 20:37:27.156167    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.156269    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:37:27.156316    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:37:27.156321    4727 certs.go:256] generating profile certs ...
	I0718 20:37:27.156387    4727 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:37:27.156400    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9
	I0718 20:37:27.156410    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 192.168.105.254]
	I0718 20:37:27.328161    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 ...
	I0718 20:37:27.328188    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9: {Name:mkff536dfdabd0cc9a693525dd142a97006d4485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 ...
	I0718 20:37:27.328655    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9: {Name:mkb963d77aed955311589ae3cd9371dca3b50bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328816    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:37:27.328945    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:37:27.329100    4727 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:37:27.329110    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:37:27.329125    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:37:27.329137    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:37:27.329150    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:37:27.329162    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:37:27.329176    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:37:27.329186    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:37:27.329197    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:37:27.329271    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:37:27.329299    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:37:27.329305    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:37:27.329347    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:37:27.329372    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:37:27.329396    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:37:27.329451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:27.329478    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.329491    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.329501    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.329519    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:27.355925    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0718 20:37:27.357647    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0718 20:37:27.362088    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0718 20:37:27.363733    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0718 20:37:27.367759    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0718 20:37:27.369261    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0718 20:37:27.373839    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0718 20:37:27.375475    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0718 20:37:27.379174    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0718 20:37:27.380628    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0718 20:37:27.384809    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0718 20:37:27.386562    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0718 20:37:27.390606    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:37:27.399865    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:37:27.408308    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:37:27.416747    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:37:27.425050    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0718 20:37:27.433244    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:37:27.441306    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:37:27.449446    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:37:27.457566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:37:27.465676    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:37:27.473743    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:37:27.482174    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0718 20:37:27.487947    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0718 20:37:27.493902    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0718 20:37:27.499712    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0718 20:37:27.505265    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0718 20:37:27.511047    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0718 20:37:27.517340    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0718 20:37:27.523229    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:37:27.525438    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:37:27.529080    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530597    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530617    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.532775    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:37:27.536483    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:37:27.540031    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541631    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541649    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.543631    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:37:27.547571    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:37:27.551419    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553057    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553079    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.555162    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:37:27.559227    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:37:27.560725    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:37:27.560754    4727 kubeadm.go:934] updating node {m02 192.168.105.6 8443 v1.30.3 docker true true} ...
	I0718 20:37:27.560799    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:37:27.560814    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:37:27.560837    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:37:27.572539    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:37:27.572577    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0718 20:37:27.572623    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.576082    4727 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0718 20:37:27.576121    4727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm
	I0718 20:37:27.579785    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet
	I0718 20:37:34.561853    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.561928    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.564073    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0718 20:37:34.564095    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (49938584 bytes)
	I0718 20:37:35.510887    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.510952    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.512864    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0718 20:37:35.512884    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (48955544 bytes)
	I0718 20:37:42.606961    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:37:42.613080    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.613168    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.614817    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0718 20:37:42.614833    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (96467384 bytes)
	I0718 20:37:43.119287    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0718 20:37:43.122637    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0718 20:37:43.128732    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:37:43.134516    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0718 20:37:43.141275    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:37:43.142606    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:43.146857    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:43.230113    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:37:43.243145    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:43.243333    4727 start.go:317] joinCluster: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:37:43.243382    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 20:37:43.243391    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:43.371073    4727 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:37:43.371092    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0718 20:38:03.232381    4727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (19.861822375s)
	I0718 20:38:03.232396    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 20:38:03.485331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000-m02 minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=false
	I0718 20:38:03.530961    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0718 20:38:03.578648    4727 start.go:319] duration metric: took 20.3358655s to joinCluster
	I0718 20:38:03.578688    4727 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:03.578898    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:03.583884    4727 out.go:177] * Verifying Kubernetes components...
	I0718 20:38:03.590972    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:03.702999    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:38:03.709797    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:38:03.709929    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0718 20:38:03.709957    4727 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.105.254:8443 with https://192.168.105.5:8443
	I0718 20:38:03.710058    4727 node_ready.go:35] waiting up to 6m0s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:03.710093    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:03.710097    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:03.710101    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:03.710109    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:03.716299    4727 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 20:38:04.212157    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.212175    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.212180    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.212182    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.217870    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:38:04.711681    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.711692    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.711696    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.711698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.713463    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.212138    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.212149    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.212153    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.212156    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.214175    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:05.711331    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.711345    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.711360    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.711363    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.712682    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.713155    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:06.210250    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.210264    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.210268    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.210271    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.212254    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:06.711235    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.711255    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.711260    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.711262    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.712940    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.212089    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.212100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.212104    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.212106    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.214317    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:07.712070    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.712079    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.712083    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.712086    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.713825    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.714102    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:08.211862    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.211878    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.211883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.211885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.213993    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:08.712062    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.712075    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.712079    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.712081    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.713753    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.212027    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.212036    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.212052    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.212055    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.213833    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.712020    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.712029    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.712033    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.712035    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.713439    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.212016    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.212025    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.212029    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.212031    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.213662    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.213924    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:10.711085    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.711100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.711114    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.711117    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.712848    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.211980    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.211995    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.211999    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.212002    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.213760    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.711981    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.711994    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.712005    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.712008    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.713435    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.211955    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.211969    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.211974    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.211976    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.213759    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.214202    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:12.711912    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.711929    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.711933    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.711935    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.713382    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.211920    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.211932    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.211941    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.211943    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.213828    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.711194    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.711206    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.711209    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.711211    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.712757    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:14.211901    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.211919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.211924    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.211932    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.213956    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:14.214285    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:14.711860    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.711876    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.711883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.711885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.713170    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.211895    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.211907    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.211911    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.211913    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.213693    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.711835    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.711849    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.711863    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.711865    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.713487    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.211839    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.211844    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.211846    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.213365    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.711659    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.711669    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.711673    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.711675    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.713252    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.713433    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:17.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.211830    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.211834    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.211836    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.213413    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:17.711756    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.711781    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.711785    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.711788    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.713341    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.211779    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.211794    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.211798    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.211800    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.213551    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.711749    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.711759    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.711764    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.711766    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.713325    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.713645    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:19.211738    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.211750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.211754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.211756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.213507    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:19.711717    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.711731    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.711734    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.711736    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.713476    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.211230    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.211271    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.211314    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.211318    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.212922    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.710773    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.710783    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.710787    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.710790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.712163    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.211705    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.211717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.211738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.211742    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.213362    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.213898    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:21.711683    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.711698    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.711702    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.711704    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.713411    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.211928    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.211938    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.211942    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.211944    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.214292    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.214473    4727 node_ready.go:49] node "ha-256000-m02" has status "Ready":"True"
	I0718 20:38:22.214479    4727 node_ready.go:38] duration metric: took 18.50492425s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:22.214483    4727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:22.214513    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:22.214523    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.214528    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.214533    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.216823    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.221656    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.221688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gl7wn
	I0718 20:38:22.221691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.221695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.221698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.223037    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.223438    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.223443    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.223447    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.223449    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.224627    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.224906    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.224912    4727 pod_ready.go:81] duration metric: took 3.247917ms for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224916    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224935    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t5fk7
	I0718 20:38:22.224937    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.224950    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.224954    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.226106    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.226400    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.226404    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.226411    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.226414    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.227526    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.227886    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.227891    4727 pod_ready.go:81] duration metric: took 2.972458ms for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227894    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227913    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000
	I0718 20:38:22.227919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.227923    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.227925    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.228991    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.229395    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.229399    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.229402    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.229406    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.230465    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.230693    4727 pod_ready.go:92] pod "etcd-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.230699    4727 pod_ready.go:81] duration metric: took 2.801916ms for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230703    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230720    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000-m02
	I0718 20:38:22.230723    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.230726    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.230728    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.231834    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.232263    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.232268    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.232271    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.232273    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.233360    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.233783    4727 pod_ready.go:92] pod "etcd-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.233789    4727 pod_ready.go:81] duration metric: took 3.083416ms for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.233794    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.413762    4727 request.go:629] Waited for 179.941666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413824    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413828    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.413841    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.413846    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.415462    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.613785    4727 request.go:629] Waited for 197.877917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613838    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613844    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.613847    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.613849    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.616581    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.616806    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.616814    4727 pod_ready.go:81] duration metric: took 383.02725ms for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.616819    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.813743    4727 request.go:629] Waited for 196.894708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813781    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813784    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.813788    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.813790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.815511    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.012375    4727 request.go:629] Waited for 196.496584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012418    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012422    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.012426    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.012428    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.014100    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.014297    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.014304    4727 pod_ready.go:81] duration metric: took 397.4915ms for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.014308    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.213728    4727 request.go:629] Waited for 199.392916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213764    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213767    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.213771    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.213774    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.215292    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.412016    4727 request.go:629] Waited for 196.230667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412048    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412050    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.412055    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.412057    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.414117    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.414317    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.414324    4727 pod_ready.go:81] duration metric: took 400.022917ms for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.414329    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.613726    4727 request.go:629] Waited for 199.367083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613754    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613757    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.613760    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.613763    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.615829    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.813718    4727 request.go:629] Waited for 197.566667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813747    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.813754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.813756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.815391    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.815670    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.815679    4727 pod_ready.go:81] duration metric: took 401.357791ms for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.815685    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.013744    4727 request.go:629] Waited for 198.028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013777    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013780    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.013783    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.013785    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.015358    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.213717    4727 request.go:629] Waited for 197.87625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213750    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213772    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.213776    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.213779    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.215177    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.215486    4727 pod_ready.go:92] pod "kube-proxy-99sn4" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.215494    4727 pod_ready.go:81] duration metric: took 399.816291ms for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.215499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.412543    4727 request.go:629] Waited for 197.022333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412572    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412576    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.412580    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.412582    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.414200    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.613688    4727 request.go:629] Waited for 199.188292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613723    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613734    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.613738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.613740    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.616115    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:24.616487    4727 pod_ready.go:92] pod "kube-proxy-jxnv9" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.616495    4727 pod_ready.go:81] duration metric: took 401.003958ms for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.616499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.811999    4727 request.go:629] Waited for 195.4745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812037    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812040    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.812044    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.812046    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.813599    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.013712    4727 request.go:629] Waited for 199.880375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013743    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013746    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.013750    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.013752    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.015408    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.015677    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.015685    4727 pod_ready.go:81] duration metric: took 399.1935ms for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.015689    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.213690    4727 request.go:629] Waited for 197.964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213729    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213735    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.213739    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.213741    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.215582    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.413674    4727 request.go:629] Waited for 197.841584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413700    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413702    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.413714    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.413717    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.415433    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.415627    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.415633    4727 pod_ready.go:81] duration metric: took 399.951542ms for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.415638    4727 pod_ready.go:38] duration metric: took 3.201238458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:25.415647    4727 api_server.go:52] waiting for apiserver process to appear ...
	I0718 20:38:25.415719    4727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:38:25.421413    4727 api_server.go:72] duration metric: took 21.843316333s to wait for apiserver process to appear ...
	I0718 20:38:25.421422    4727 api_server.go:88] waiting for apiserver healthz status ...
	I0718 20:38:25.421429    4727 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0718 20:38:25.424174    4727 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0718 20:38:25.424198    4727 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0718 20:38:25.424200    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.424204    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.424207    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.424682    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:38:25.424723    4727 api_server.go:141] control plane version: v1.30.3
	I0718 20:38:25.424729    4727 api_server.go:131] duration metric: took 3.305084ms to wait for apiserver health ...
	I0718 20:38:25.424732    4727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 20:38:25.613673    4727 request.go:629] Waited for 188.916583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613714    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.613721    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.613723    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.616608    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:25.620463    4727 system_pods.go:59] 17 kube-system pods found
	I0718 20:38:25.620472    4727 system_pods.go:61] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:25.620475    4727 system_pods.go:61] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:25.620477    4727 system_pods.go:61] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:25.620479    4727 system_pods.go:61] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:25.620480    4727 system_pods.go:61] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:25.620482    4727 system_pods.go:61] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:25.620484    4727 system_pods.go:61] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:25.620486    4727 system_pods.go:61] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:25.620488    4727 system_pods.go:61] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:25.620490    4727 system_pods.go:61] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:25.620492    4727 system_pods.go:61] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:25.620493    4727 system_pods.go:61] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:25.620495    4727 system_pods.go:61] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:25.620497    4727 system_pods.go:61] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:25.620498    4727 system_pods.go:61] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:25.620500    4727 system_pods.go:61] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:25.620502    4727 system_pods.go:61] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:25.620505    4727 system_pods.go:74] duration metric: took 195.775375ms to wait for pod list to return data ...
	I0718 20:38:25.620509    4727 default_sa.go:34] waiting for default service account to be created ...
	I0718 20:38:25.813683    4727 request.go:629] Waited for 193.137584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813709    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813712    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.813716    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.813721    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.815354    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.815466    4727 default_sa.go:45] found service account: "default"
	I0718 20:38:25.815474    4727 default_sa.go:55] duration metric: took 194.966875ms for default service account to be created ...
	I0718 20:38:25.815479    4727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 20:38:26.013652    4727 request.go:629] Waited for 198.147166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.013695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.013702    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.016448    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:26.020596    4727 system_pods.go:86] 17 kube-system pods found
	I0718 20:38:26.020604    4727 system_pods.go:89] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:26.020607    4727 system_pods.go:89] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:26.020609    4727 system_pods.go:89] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:26.020611    4727 system_pods.go:89] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:26.020613    4727 system_pods.go:89] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:26.020615    4727 system_pods.go:89] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:26.020617    4727 system_pods.go:89] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:26.020619    4727 system_pods.go:89] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:26.020621    4727 system_pods.go:89] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:26.020622    4727 system_pods.go:89] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:26.020624    4727 system_pods.go:89] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:26.020626    4727 system_pods.go:89] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:26.020628    4727 system_pods.go:89] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:26.020629    4727 system_pods.go:89] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:26.020631    4727 system_pods.go:89] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:26.020633    4727 system_pods.go:89] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:26.020635    4727 system_pods.go:89] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:26.020641    4727 system_pods.go:126] duration metric: took 205.165291ms to wait for k8s-apps to be running ...
	I0718 20:38:26.020645    4727 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 20:38:26.020720    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:38:26.027026    4727 system_svc.go:56] duration metric: took 6.37875ms WaitForService to wait for kubelet
	I0718 20:38:26.027036    4727 kubeadm.go:582] duration metric: took 22.448955791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:38:26.027047    4727 node_conditions.go:102] verifying NodePressure condition ...
	I0718 20:38:26.213670    4727 request.go:629] Waited for 186.592667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213748    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213751    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.213756    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.213758    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.215369    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:26.215702    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215710    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215716    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215719    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215721    4727 node_conditions.go:105] duration metric: took 188.677125ms to run NodePressure ...
	I0718 20:38:26.215733    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:38:26.215747    4727 start.go:255] writing updated cluster config ...
	I0718 20:38:26.221138    4727 out.go:177] 
	I0718 20:38:26.225195    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:26.225251    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.230070    4727 out.go:177] * Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	I0718 20:38:26.238085    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:38:26.238092    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:38:26.238177    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:38:26.238184    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:38:26.238226    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.238529    4727 start.go:360] acquireMachinesLock for ha-256000-m03: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:38:26.238563    4727 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "ha-256000-m03"
	I0718 20:38:26.238573    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:26.238613    4727 start.go:125] createHost starting for "m03" (driver="qemu2")
	I0718 20:38:26.243026    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:38:26.268172    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:38:26.268206    4727 client.go:168] LocalClient.Create starting
	I0718 20:38:26.268290    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:38:26.268328    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268338    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268376    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:38:26.268399    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268406    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268691    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:38:26.426584    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:38:26.572781    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:38:26.572789    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:38:26.573022    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.588299    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.588321    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.588408    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2 +20000M
	I0718 20:38:26.597072    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:38:26.597089    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.597102    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.597113    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:38:26.597129    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:38:26.597163    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7f:0e:0c:6d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.641473    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.641500    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.641504    4727 main.go:141] libmachine: Attempt 0
	I0718 20:38:26.641520    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:26.641735    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:26.641749    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:26.641756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:26.641761    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:26.641765    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:26.641770    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:26.641776    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:28.643878    4727 main.go:141] libmachine: Attempt 1
	I0718 20:38:28.643913    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:28.644011    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:28.644023    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:28.644028    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:28.644032    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:28.644036    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:28.644046    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:28.644052    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:30.646081    4727 main.go:141] libmachine: Attempt 2
	I0718 20:38:30.646120    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:30.646235    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:30.646244    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:30.646250    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:30.646254    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:30.646258    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:30.646262    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:30.646267    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:32.648349    4727 main.go:141] libmachine: Attempt 3
	I0718 20:38:32.648374    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:32.648466    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:32.648477    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:32.648481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:32.648486    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:32.648497    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:32.648501    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:32.648514    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:34.650548    4727 main.go:141] libmachine: Attempt 4
	I0718 20:38:34.650566    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:34.650664    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:34.650674    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:34.650678    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:34.650682    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:34.650686    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:34.650692    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:34.650696    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:36.652758    4727 main.go:141] libmachine: Attempt 5
	I0718 20:38:36.652796    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:36.652971    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:36.652995    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:36.653008    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:36.653088    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:36.653108    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:36.653113    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:36.653119    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:38.654089    4727 main.go:141] libmachine: Attempt 6
	I0718 20:38:38.654205    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:38.654304    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:38.654315    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:38.654320    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:38.654329    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:38.654333    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:38.654338    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:38.654343    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:40.656398    4727 main.go:141] libmachine: Attempt 7
	I0718 20:38:40.656425    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:40.656535    4727 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0718 20:38:40.656552    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:d2:7f:e:c:6d:ba ID:1,d2:7f:e:c:6d:ba Lease:0x669b313f}
	I0718 20:38:40.656554    4727 main.go:141] libmachine: Found match: d2:7f:e:c:6d:ba
	I0718 20:38:40.656561    4727 main.go:141] libmachine: IP: 192.168.105.7
	I0718 20:38:40.656567    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
	I0718 20:38:49.679874    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:38:49.680098    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.680386    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.680393    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:38:49.720341    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:38:49.720352    4727 buildroot.go:166] provisioning hostname "ha-256000-m03"
	I0718 20:38:49.720396    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.720501    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.720507    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m03 && echo "ha-256000-m03" | sudo tee /etc/hostname
	I0718 20:38:49.765619    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m03
	
	I0718 20:38:49.765691    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.765821    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.765830    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:38:49.809445    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:38:49.809457    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:38:49.809463    4727 buildroot.go:174] setting up certificates
	I0718 20:38:49.809467    4727 provision.go:84] configureAuth start
	I0718 20:38:49.809471    4727 provision.go:143] copyHostCerts
	I0718 20:38:49.809497    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809560    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:38:49.809567    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809680    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:38:49.810515    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810551    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:38:49.810554    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810618    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:38:49.810856    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810884    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:38:49.810888    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810942    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:38:49.811128    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m03 san=[127.0.0.1 192.168.105.7 ha-256000-m03 localhost minikube]
	I0718 20:38:49.892392    4727 provision.go:177] copyRemoteCerts
	I0718 20:38:49.892426    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:38:49.892435    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:49.917004    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:38:49.917069    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:38:49.925760    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:38:49.925809    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:38:49.934495    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:38:49.934547    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:38:49.944465    4727 provision.go:87] duration metric: took 134.994083ms to configureAuth
	I0718 20:38:49.944477    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:38:49.946418    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:49.946460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.946554    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.946559    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:38:49.988863    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:38:49.988874    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:38:49.988957    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:38:49.989005    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.989117    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.989151    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	Environment="NO_PROXY=192.168.105.5,192.168.105.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:38:50.033434    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	Environment=NO_PROXY=192.168.105.5,192.168.105.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:38:50.033494    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:50.033609    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:50.033618    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:38:51.357934    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:38:51.357948    4727 machine.go:97] duration metric: took 1.678110291s to provisionDockerMachine
	I0718 20:38:51.357955    4727 client.go:171] duration metric: took 25.090436s to LocalClient.Create
	I0718 20:38:51.357970    4727 start.go:167] duration metric: took 25.090492834s to libmachine.API.Create "ha-256000"
	I0718 20:38:51.357987    4727 start.go:293] postStartSetup for "ha-256000-m03" (driver="qemu2")
	I0718 20:38:51.357993    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:38:51.358064    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:38:51.358075    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.383362    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:38:51.385220    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:38:51.385229    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:38:51.385339    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:38:51.385460    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:38:51.385466    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:38:51.385589    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:38:51.389076    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:38:51.397667    4727 start.go:296] duration metric: took 39.676333ms for postStartSetup
	I0718 20:38:51.398148    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:51.398353    4727 start.go:128] duration metric: took 25.1604295s to createHost
	I0718 20:38:51.398381    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:51.398475    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:51.398479    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:38:51.443684    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360331.726119547
	
	I0718 20:38:51.443697    4727 fix.go:216] guest clock: 1721360331.726119547
	I0718 20:38:51.443701    4727 fix.go:229] Guest: 2024-07-18 20:38:51.726119547 -0700 PDT Remote: 2024-07-18 20:38:51.39836 -0700 PDT m=+164.266937085 (delta=327.759547ms)
	I0718 20:38:51.443713    4727 fix.go:200] guest clock delta is within tolerance: 327.759547ms
	I0718 20:38:51.443716    4727 start.go:83] releasing machines lock for "ha-256000-m03", held for 25.205843709s
	I0718 20:38:51.447883    4727 out.go:177] * Found network options:
	I0718 20:38:51.451892    4727 out.go:177]   - NO_PROXY=192.168.105.5,192.168.105.6
	W0718 20:38:51.455815    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.455829    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456208    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456223    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:38:51.456298    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:38:51.456327    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:38:51.479804    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:38:51.479862    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:38:51.524774    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:38:51.524786    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.524847    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.531855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:38:51.535855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:38:51.539545    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.539580    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:38:51.543520    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.547437    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:38:51.551284    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.555870    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:38:51.559926    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:38:51.563772    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:38:51.567972    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:38:51.572324    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:38:51.576791    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:38:51.580307    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.641726    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:38:51.654538    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.654606    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:38:51.661500    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.671940    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:38:51.683005    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.689286    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.694846    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:38:51.739658    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.745604    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.752465    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:38:51.754039    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:38:51.757754    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:38:51.764400    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:38:51.833658    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:38:51.901993    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.902021    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:38:51.910153    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.983567    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:39:53.221259    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.239360917s)
	I0718 20:39:53.221338    4727 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 20:39:53.233907    4727 out.go:177] 
	W0718 20:39:53.237861    4727 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:38:50 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531478880Z" level=info msg="Starting up"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531868672Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.532448547Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.550167964Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560007672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560035005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560063505Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560074839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560111130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560123547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560217922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560230922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560237130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560241589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560270464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560366505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561097130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561114380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561185047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561197839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561245172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561280130Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563923422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563946005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563952880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563959547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563972505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564012380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564132589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564175464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564185714Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564191797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564197839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564204005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564210464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564216297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564222297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564228089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564233922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564239422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564256255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564264589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564270589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564276339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564281380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564287547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564292755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564298214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564303922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564310047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564315047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564320255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564325630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564332547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564341589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564346797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564352089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564402380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564416755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564421630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564427380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564432047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564437755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564467089Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564611964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564632964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564646839Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564655005Z" level=info msg="containerd successfully booted in 0.014823s"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.553636672Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.561497047Z" level=info msg="Loading containers: start."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.589775631Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.620757631Z" level=info msg="Loading containers: done."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624562881Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624599339Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:38:51 ha-256000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641454297Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641495839Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:38:52 ha-256000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.265389656Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266153693Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266192011Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266216137Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266284865Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:53 ha-256000-m03 dockerd[931]: time="2024-07-19T03:38:53.282812481Z" level=info msg="Starting up"
	Jul 19 03:39:53 ha-256000-m03 dockerd[931]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 20:39:53.237915    4727 out.go:239] * 
	W0718 20:39:53.239556    4727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:39:53.244752    4727 out.go:177] 
	
	
	==> Docker <==
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62c92a2e03424d74abec35244521f1b7761982d7dbb7311513fb13f822c225ed/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f20cc01dd922b82b1ee5c6472024624755b1340ebceab21cf25c6eacf6e19c4/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5db9ae745b118ebe428663f3f1c8c679cdc1a26cea72ee6016f951ae34fc28ea/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858940540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858976718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858984229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.859018904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861914444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861992224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862003156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862051518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889214398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889287171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889293388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889346507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061800448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061875454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061930291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a81719e2049682e90e011b40424dd53e2ae913d00000287c821ac163206c9b20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 03:39:56 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404399110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404453937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404462477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404689325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf6fa4236c452       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   a81719e204968       busybox-fc5497c4f-5922h
	6dfd469e7d36e       ba04bb24b9575                                                                                         14 minutes ago      Running             storage-provisioner       0                   5db9ae745b118       storage-provisioner
	1097379f4f6cb       2437cf7621777                                                                                         14 minutes ago      Running             coredns                   0                   62c92a2e03424       coredns-7db6d8ff4d-gl7wn
	9a1c088f8966e       2437cf7621777                                                                                         14 minutes ago      Running             coredns                   0                   5f20cc01dd922       coredns-7db6d8ff4d-t5fk7
	74fc7ee221313       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              14 minutes ago      Running             kindnet-cni               0                   f7fb0ae46c979       kindnet-znvgn
	9103cd3e30ac5       2351f570ed0ea                                                                                         14 minutes ago      Running             kube-proxy                0                   dd4c5c6f3ce08       kube-proxy-jxnv9
	8128016ed9c34       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   e405a8655e904       kube-vip-ha-256000
	d5ff116ccff16       014faa467e297                                                                                         15 minutes ago      Running             etcd                      0                   1dd441769aa2a       etcd-ha-256000
	29f96bba40d3a       d48f992a22722                                                                                         15 minutes ago      Running             kube-scheduler            0                   aa59c4a58dba5       kube-scheduler-ha-256000
	70ffd55232c0b       8e97cdb19e7cc                                                                                         15 minutes ago      Running             kube-controller-manager   0                   96446dab38e98       kube-controller-manager-ha-256000
	dff4e67b66806       61773190d42ff                                                                                         15 minutes ago      Running             kube-apiserver            0                   877c87b7df476       kube-apiserver-ha-256000
	
	
	==> coredns [1097379f4f6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37765 - 42644 "HINFO IN 3312804127670044151.9315725327003923. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.009474143s
	[INFO] 10.244.0.4:33989 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.044131336s
	[INFO] 10.244.0.4:49979 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001205888s
	[INFO] 10.244.1.2:54862 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000064045s
	[INFO] 10.244.0.4:54057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097379s
	[INFO] 10.244.0.4:39996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065545s
	[INFO] 10.244.0.4:39732 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063878s
	[INFO] 10.244.1.2:57277 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070961s
	[INFO] 10.244.1.2:44544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00059536s
	[INFO] 10.244.1.2:33879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042043s
	[INFO] 10.244.1.2:41170 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039002s
	[INFO] 10.244.0.4:32818 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000023751s
	[INFO] 10.244.0.4:44658 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027251s
	[INFO] 10.244.1.2:36566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093796s
	[INFO] 10.244.1.2:41685 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035752s
	[INFO] 10.244.1.2:36603 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000019667s
	
	
	==> coredns [9a1c088f8966] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42392 - 40278 "HINFO IN 2632545797447059373.9195703630793318012. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009665964s
	[INFO] 10.244.0.4:39096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234719s
	[INFO] 10.244.0.4:39212 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010352553s
	[INFO] 10.244.1.2:39974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082254s
	[INFO] 10.244.1.2:48244 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00062732s
	[INFO] 10.244.1.2:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000022126s
	[INFO] 10.244.0.4:43528 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001761788s
	[INFO] 10.244.0.4:39922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072504s
	[INFO] 10.244.0.4:40557 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054253s
	[INFO] 10.244.0.4:36599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000831538s
	[INFO] 10.244.0.4:35378 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072337s
	[INFO] 10.244.1.2:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082296s
	[INFO] 10.244.1.2:55926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000027209s
	[INFO] 10.244.1.2:50938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031001s
	[INFO] 10.244.1.2:32874 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004696s
	[INFO] 10.244.0.4:39411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067337s
	[INFO] 10.244.0.4:56069 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000028543s
	[INFO] 10.244.1.2:60061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076628s
	
	
	==> describe nodes <==
	Name:               ha-256000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:51:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-256000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d710ce1e1896426084c421362e18dda0
	  System UUID:                d710ce1e1896426084c421362e18dda0
	  Boot ID:                    83486cc1-e7b0-4568-bb5a-c46474de14e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5922h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-gl7wn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-t5fk7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-256000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-znvgn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-256000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-256000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-jxnv9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-256000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-256000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node ha-256000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node ha-256000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node ha-256000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	  Normal  NodeReady                14m   kubelet          Node ha-256000 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	
	
	Name:               ha-256000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ha-256000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  System UUID:                b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  Boot ID:                    b548924b-9c86-4ba2-9a9e-2e5cc7830327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqdhb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-256000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-2mvfm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-256000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-256000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-99sn4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-256000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-256000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-256000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650707] EINJ: EINJ table not found.
	[  +0.549800] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.136927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000360] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +3.624626] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.080461] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.034842] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.469016] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.194273] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.081032] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.086446] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +2.293076] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.088824] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.085311] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.095642] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[  +2.542348] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.036994] kauditd_printk_skb: 257 callbacks suppressed
	[  +2.330914] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +2.194691] systemd-fstab-generator[1695]: Ignoring "noauto" option for root device
	[  +0.779104] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.727432] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +15.155229] kauditd_printk_skb: 62 callbacks suppressed
	[Jul19 03:37] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 03:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5ff116ccff1] <==
	{"level":"info","ts":"2024-07-19T03:38:02.849603Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.851115Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"58de0efec1d86300","to":"dcb4f5dcb4017fbf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T03:38:02.851146Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:03.239361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856 15903606512413671359)"}
	{"level":"info","ts":"2024-07-19T03:38:03.239499Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300"}
	{"level":"info","ts":"2024-07-19T03:38:03.239512Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"58de0efec1d86300","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"dcb4f5dcb4017fbf"}
	{"level":"warn","ts":"2024-07-19T03:38:38.860449Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7133861002988229904,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-19T03:38:39.213772Z","caller":"traceutil/trace.go:171","msg":"trace[213955580] linearizableReadLoop","detail":"{readStateIndex:773; appliedIndex:773; }","duration":"854.090297ms","start":"2024-07-19T03:38:38.359661Z","end":"2024-07-19T03:38:39.213752Z","steps":["trace[213955580] 'read index received'  (duration: 854.085672ms)","trace[213955580] 'applied index is now lower than readState.Index'  (duration: 1.458µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:38:39.214653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"854.964275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.5\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-19T03:38:39.214668Z","caller":"traceutil/trace.go:171","msg":"trace[64905690] range","detail":"{range_begin:/registry/masterleases/192.168.105.5; range_end:; response_count:1; response_revision:726; }","duration":"855.016063ms","start":"2024-07-19T03:38:38.359648Z","end":"2024-07-19T03:38:39.214664Z","steps":["trace[64905690] 'agreement among raft nodes before linearized reading'  (duration: 854.846409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.214698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.359622Z","time spent":"855.063476ms","remote":"127.0.0.1:50924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.105.5\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.217551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.784693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.217629Z","caller":"traceutil/trace.go:171","msg":"trace[485073674] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:726; }","duration":"181.858104ms","start":"2024-07-19T03:38:39.035755Z","end":"2024-07-19T03:38:39.217613Z","steps":["trace[485073674] 'agreement among raft nodes before linearized reading'  (duration: 181.775735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.961025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-19T03:38:39.218206Z","caller":"traceutil/trace.go:171","msg":"trace[1437088211] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:726; }","duration":"362.976608ms","start":"2024-07-19T03:38:38.855164Z","end":"2024-07-19T03:38:39.218141Z","steps":["trace[1437088211] 'agreement among raft nodes before linearized reading'  (duration: 362.940194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.855138Z","time spent":"363.085141ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.219731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.350481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.21976Z","caller":"traceutil/trace.go:171","msg":"trace[1532987535] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:726; }","duration":"513.381938ms","start":"2024-07-19T03:38:38.706374Z","end":"2024-07-19T03:38:39.219756Z","steps":["trace[1532987535] 'agreement among raft nodes before linearized reading'  (duration: 509.325689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.219771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.706284Z","time spent":"513.484013ms","remote":"127.0.0.1:50868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-19T03:46:36.540686Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2024-07-19T03:46:36.562489Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1175,"took":"20.474469ms","hash":3930648337,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-19T03:46:36.562693Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3930648337,"revision":1175,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T03:51:36.54679Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1806}
	{"level":"info","ts":"2024-07-19T03:51:36.56014Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1806,"took":"13.081219ms","hash":2540466080,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1347584,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2024-07-19T03:51:36.560169Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2540466080,"revision":1806,"compact-revision":1175}
	
	
	==> kernel <==
	 03:51:37 up 15 min,  0 users,  load average: 0.05, 0.11, 0.09
	Linux ha-256000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74fc7ee22131] <==
	I0719 03:50:29.210276       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:50:39.212130       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:50:39.212151       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:50:39.212286       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:50:39.212297       1 main.go:303] handling current node
	I0719 03:50:49.215283       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:50:49.215305       1 main.go:303] handling current node
	I0719 03:50:49.215314       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:50:49.215317       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:50:59.210139       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:50:59.210159       1 main.go:303] handling current node
	I0719 03:50:59.210171       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:50:59.210174       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:09.209774       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:09.209793       1 main.go:303] handling current node
	I0719 03:51:09.209802       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:09.209805       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:19.211720       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:19.211738       1 main.go:303] handling current node
	I0719 03:51:19.211748       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:19.211751       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:29.209881       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:29.209911       1 main.go:303] handling current node
	I0719 03:51:29.209921       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:29.209924       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [dff4e67b6680] <==
	I0719 03:36:37.268298       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 03:36:38.152171       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 03:36:38.153736       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 03:36:38.153745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 03:36:38.302580       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 03:36:38.313862       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 03:36:38.355728       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 03:36:38.357891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0719 03:36:38.358258       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:36:38.359450       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 03:36:39.162576       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 03:36:39.259455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 03:36:39.263308       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 03:36:39.266876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 03:36:53.692820       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 03:36:53.723447       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 03:38:39.230077       1 trace.go:236] Trace[99535700]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.105.5,type:*v1.Endpoints,resource:apiServerIPInfo (19-Jul-2024 03:38:38.359) (total time: 870ms):
	Trace[99535700]: ---"initial value restored" 856ms (03:38:39.216)
	Trace[99535700]: [870.770259ms] [870.770259ms] END
	E0719 03:51:35.729254       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50022: use of closed network connection
	E0719 03:51:35.841233       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50024: use of closed network connection
	E0719 03:51:36.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50029: use of closed network connection
	E0719 03:51:36.142429       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50031: use of closed network connection
	E0719 03:51:36.323525       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50036: use of closed network connection
	E0719 03:51:36.429306       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50038: use of closed network connection
	
	
	==> kube-controller-manager [70ffd55232c0] <==
	I0719 03:36:54.412561       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:36:54.412576       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 03:37:22.400084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.747µs"
	I0719 03:37:22.402636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.4µs"
	I0719 03:37:22.408319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.037µs"
	I0719 03:37:22.415741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.491µs"
	I0719 03:37:23.262808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.25µs"
	I0719 03:37:23.279353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.239521ms"
	I0719 03:37:23.279510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.085µs"
	I0719 03:37:23.294158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.586299ms"
	I0719 03:37:23.294186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.391µs"
	I0719 03:37:23.772649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 03:38:01.950412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m02\" does not exist"
	I0719 03:38:01.956739       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 03:38:03.779798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m02"
	I0719 03:39:54.715082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.549011ms"
	I0719 03:39:54.728524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.544471ms"
	I0719 03:39:54.760521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.962639ms"
	I0719 03:39:54.798120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.556155ms"
	I0719 03:39:54.810232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.068766ms"
	I0719 03:39:54.810338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.794µs"
	I0719 03:39:56.791240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.855498ms"
	I0719 03:39:56.791390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.29µs"
	I0719 03:39:57.235525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.740732ms"
	I0719 03:39:57.236806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.25502ms"
	
	
	==> kube-proxy [9103cd3e30ac] <==
	I0719 03:36:54.228395       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:36:54.235224       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0719 03:36:54.286000       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:36:54.286028       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:36:54.286039       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:36:54.287034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:36:54.287396       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:36:54.287403       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:36:54.288184       1 config.go:192] "Starting service config controller"
	I0719 03:36:54.288259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:36:54.288280       1 config.go:319] "Starting node config controller"
	I0719 03:36:54.288282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:36:54.289304       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:36:54.289308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:36:54.388688       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:36:54.388711       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:36:54.389972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29f96bba40d3] <==
	W0719 03:36:37.216385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:37.216419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 03:36:37.216424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 03:36:37.216440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:36:37.216444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 03:36:37.216461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 03:36:37.216464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 03:36:37.216476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:36:37.216491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:36:37.216504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.043369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:36:38.043491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:36:38.078796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.078841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.135286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.135302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.143595       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:36:38.143607       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 03:36:40.612937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 03:39:54.727744       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	E0719 03:39:54.727817       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1bb5b7eb-c669-43f7-ac3f-753596620b94(default/busybox-fc5497c4f-5922h) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-5922h"
	E0719 03:39:54.727832       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" pod="default/busybox-fc5497c4f-5922h"
	I0719 03:39:54.727844       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	
	
	==> kubelet <==
	Jul 19 03:46:39 ha-256000 kubelet[2215]: E0719 03:46:39.080023    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:46:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:46:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:46:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:46:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:47:39 ha-256000 kubelet[2215]: E0719 03:47:39.079617    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:48:39 ha-256000 kubelet[2215]: E0719 03:48:39.080370    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:49:39 ha-256000 kubelet[2215]: E0719 03:49:39.079647    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:50:39 ha-256000 kubelet[2215]: E0719 03:50:39.079658    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-256000 -n ha-256000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-256000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-hkhd4
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-256000 describe pod busybox-fc5497c4f-hkhd4
helpers_test.go:282: (dbg) kubectl --context ha-256000 describe pod busybox-fc5497c4f-hkhd4:

-- stdout --
	Name:             busybox-fc5497c4f-hkhd4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f6vj6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-f6vj6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  11m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  92s (x2 over 6m32s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  86s (x3 over 11m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (703.04s)

TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-5922h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-5922h -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-bqdhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-bqdhb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-256000 -- exec busybox-fc5497c4f-hkhd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (79.836875ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-hkhd4 does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-hkhd4 could not resolve 'host.minikube.internal': exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:49 PDT | 18 Jul 24 20:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.105.1           |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.105.1           |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:36:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:07.154652    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154655    4727 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:07.154657    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154787    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:36:07.155777    4727 out.go:298] Setting JSON to false
	I0718 20:36:07.172062    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2135,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:36:07.172136    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:07.175769    4727 out.go:177] * [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:36:07.182867    4727 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:07.182897    4727 notify.go:220] Checking for updates...
	I0718 20:36:07.188814    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:07.191895    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:36:07.192950    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:07.195871    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:36:07.198897    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:07.202011    4727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:07.205826    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 20:36:07.212869    4727 start.go:297] selected driver: qemu2
	I0718 20:36:07.212875    4727 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:36:07.212880    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:07.215027    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:36:07.217921    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0718 20:36:07.220933    4727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:36:07.220960    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:07.220968    4727 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 20:36:07.220971    4727 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 20:36:07.220995    4727 start.go:340] cluster config:
	{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:07.224405    4727 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:36:07.231878    4727 out.go:177] * Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	I0718 20:36:07.235849    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:07.235880    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:36:07.235892    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:07.235960    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:07.235965    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:07.236167    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:07.236181    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json: {Name:mk4f96c33b167a65b92bd4e48e5f1a3c7a52bbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:07.236387    4727 start.go:360] acquireMachinesLock for ha-256000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:07.236422    4727 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-256000"
	I0718 20:36:07.236432    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:07.236461    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 20:36:07.243901    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:07.268930    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:07.268958    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:07.269026    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:07.269056    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269065    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269104    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:07.269127    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269136    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269466    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:07.395393    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:07.434010    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:07.434014    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:07.434195    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.445169    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.445186    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.445241    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2 +20000M
	I0718 20:36:07.453205    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:07.453220    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.453236    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.453239    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:07.453248    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:07.453278    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e3:ed:16:92:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.491921    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.491947    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.491951    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:07.491963    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:07.492029    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:07.492048    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:07.492054    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:07.492061    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:07.492067    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:09.494175    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:09.494254    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:09.494618    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:09.494729    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:09.494764    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:09.494789    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:09.494817    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:11.496994    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:11.497242    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:11.497663    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:11.497717    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:11.497756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:11.497787    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:11.497819    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:13.500006    4727 main.go:141] libmachine: Attempt 3
	I0718 20:36:13.500080    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:13.500185    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:13.500200    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:13.500205    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:13.500210    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:13.500216    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:15.502208    4727 main.go:141] libmachine: Attempt 4
	I0718 20:36:15.502220    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:15.502255    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:15.502275    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:15.502280    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:15.502285    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:15.502290    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:17.504286    4727 main.go:141] libmachine: Attempt 5
	I0718 20:36:17.504293    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:17.504346    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:17.504356    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:17.504360    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:17.504364    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:17.504369    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:19.506369    4727 main.go:141] libmachine: Attempt 6
	I0718 20:36:19.506395    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:19.506467    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:19.506476    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:19.506481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:19.506485    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:19.506490    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:21.508527    4727 main.go:141] libmachine: Attempt 7
	I0718 20:36:21.508554    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:21.508694    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:21.508708    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:21.508719    4727 main.go:141] libmachine: Found match: 6a:e3:ed:16:92:d5
	I0718 20:36:21.508730    4727 main.go:141] libmachine: IP: 192.168.105.5
	I0718 20:36:21.508735    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
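The attempts above show the qemu2 driver repeatedly scanning the macOS DHCP lease database until an entry with the VM's MAC address appears, then taking its IP. Below is a minimal Go sketch of that polling pattern; it is an illustration only, not minikube's code, and the lease-file field names (`hw_address=`, `ip_address=`) and the 2-second retry interval are assumptions taken from the log and the usual /var/db/dhcpd_leases layout.

// Sketch: poll /var/db/dhcpd_leases for a lease whose hardware address matches
// the VM's MAC, retrying every two seconds as the log above does.
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

func waitForIP(mac string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if data, err := os.ReadFile("/var/db/dhcpd_leases"); err == nil {
			for _, entry := range strings.Split(string(data), "{") {
				if !strings.Contains(entry, "hw_address=1,"+mac) {
					continue // not the VM we are waiting for
				}
				for _, line := range strings.Split(entry, "\n") {
					line = strings.TrimSpace(line)
					if strings.HasPrefix(line, "ip_address=") {
						return strings.TrimPrefix(line, "ip_address="), nil
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("6a:e3:ed:16:92:d5", 30)
	fmt.Println(ip, err)
}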
	I0718 20:36:22.527247    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:36:22.527480    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.527975    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.527990    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:36:22.610697    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:36:22.610726    4727 buildroot.go:166] provisioning hostname "ha-256000"
	I0718 20:36:22.610824    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.611097    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.611107    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000 && echo "ha-256000" | sudo tee /etc/hostname
	I0718 20:36:22.682492    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000
	
	I0718 20:36:22.682552    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.682702    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.682713    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:36:22.742479    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:36:22.742492    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:36:22.742500    4727 buildroot.go:174] setting up certificates
	I0718 20:36:22.742504    4727 provision.go:84] configureAuth start
	I0718 20:36:22.742508    4727 provision.go:143] copyHostCerts
	I0718 20:36:22.742542    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742586    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:36:22.742593    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742831    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:36:22.743010    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743030    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:36:22.743033    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743097    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:36:22.743184    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743212    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:36:22.743215    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743275    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:36:22.743373    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000 san=[127.0.0.1 192.168.105.5 ha-256000 localhost minikube]
	I0718 20:36:22.831924    4727 provision.go:177] copyRemoteCerts
	I0718 20:36:22.831953    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:36:22.831960    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:22.861471    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:36:22.861517    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:36:22.869576    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:36:22.869616    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0718 20:36:22.877642    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:36:22.877682    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 20:36:22.885597    4727 provision.go:87] duration metric: took 143.091583ms to configureAuth
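The configureAuth step above issues a Docker TLS server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, signed by the existing minikube CA. The following self-contained Go sketch shows the general crypto/x509 flow for issuing such a certificate; it generates a throwaway CA inline for brevity (whereas the log reuses ca.pem/ca-key.pem), and all identifiers here are illustrative, not minikube's own code.

// Sketch: issue a TLS server certificate with the SANs listed in the log
// (127.0.0.1, 192.168.105.5, ha-256000, localhost, minikube).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (minikube would load its existing CA key pair instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs from the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-256000"},
		DNSNames:     []string{"ha-256000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.5")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}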
	I0718 20:36:22.885605    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:36:22.885700    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:22.885731    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.885814    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.885819    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:36:22.939257    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:36:22.939268    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:36:22.939327    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:36:22.939382    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.939495    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.939529    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:36:22.999120    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:36:22.999176    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.999299    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.999307    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:36:24.399001    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:36:24.399014    4727 machine.go:97] duration metric: took 1.871786709s to provisionDockerMachine
	I0718 20:36:24.399020    4727 client.go:171] duration metric: took 17.130530167s to LocalClient.Create
	I0718 20:36:24.399035    4727 start.go:167] duration metric: took 17.130580916s to libmachine.API.Create "ha-256000"
	I0718 20:36:24.399041    4727 start.go:293] postStartSetup for "ha-256000" (driver="qemu2")
	I0718 20:36:24.399047    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:36:24.399133    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:36:24.399144    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.429882    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:36:24.431446    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:36:24.431458    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:36:24.431559    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:36:24.431674    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:36:24.431679    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:36:24.431800    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:36:24.434949    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:24.443099    4727 start.go:296] duration metric: took 44.054208ms for postStartSetup
	I0718 20:36:24.443547    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:24.443727    4727 start.go:128] duration metric: took 17.207737166s to createHost
	I0718 20:36:24.443753    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:24.443841    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:24.443845    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:36:24.496185    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360184.183489336
	
	I0718 20:36:24.496191    4727 fix.go:216] guest clock: 1721360184.183489336
	I0718 20:36:24.496195    4727 fix.go:229] Guest: 2024-07-18 20:36:24.183489336 -0700 PDT Remote: 2024-07-18 20:36:24.44373 -0700 PDT m=+17.308254043 (delta=-260.240664ms)
	I0718 20:36:24.496206    4727 fix.go:200] guest clock delta is within tolerance: -260.240664ms
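The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it to the host clock, and accept the result because the -260ms delta is within tolerance. A tiny Go sketch of that comparison is below, using the exact timestamps from the log; the 2-second tolerance is an assumption for illustration, not necessarily the value minikube applies.

// Sketch: compute the guest-vs-host clock delta from the two timestamps logged
// above and check it against an assumed tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1721360184, 183489336) // guest `date +%s.%N`
	host := time.Unix(1721360184, 443730000)  // host clock at the same moment
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
}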
	I0718 20:36:24.496210    4727 start.go:83] releasing machines lock for "ha-256000", held for 17.260259709s
	I0718 20:36:24.496487    4727 ssh_runner.go:195] Run: cat /version.json
	I0718 20:36:24.496496    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.498161    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:36:24.498180    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.526501    4727 ssh_runner.go:195] Run: systemctl --version
	I0718 20:36:24.575612    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 20:36:24.577665    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:36:24.577696    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:36:24.584047    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:36:24.584056    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.584135    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.590860    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:36:24.594365    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:36:24.597804    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.597834    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:36:24.601501    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.605402    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:36:24.609279    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.613150    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:36:24.616783    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:36:24.620826    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:36:24.624868    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:36:24.628746    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:36:24.632406    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:36:24.635998    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:24.719937    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:36:24.727107    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.727172    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:36:24.734556    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.745145    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:36:24.752682    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.758405    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.763722    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:36:24.804424    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.810784    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.817505    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:36:24.818968    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:36:24.822004    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:36:24.827814    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:36:24.912234    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:36:24.993893    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.993951    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:36:25.000295    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:25.079893    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:27.267877    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188026583s)
	I0718 20:36:27.267954    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:36:27.273388    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:36:27.280952    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.286424    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:36:27.376871    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:36:27.462186    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.546490    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:36:27.553023    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.558470    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.643444    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:36:27.668876    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:36:27.669018    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:36:27.671231    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:36:27.671271    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:36:27.672746    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:36:27.689183    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:36:27.689243    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.699313    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.710299    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:36:27.710436    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:36:27.711936    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:27.716497    4727 kubeadm.go:883] updating cluster {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0718 20:36:27.716547    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:27.716590    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:27.721193    4727 docker.go:685] Got preloaded images: 
	I0718 20:36:27.721201    4727 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0718 20:36:27.721249    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:27.725068    4727 ssh_runner.go:195] Run: which lz4
	I0718 20:36:27.726303    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0718 20:36:27.726385    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0718 20:36:27.727841    4727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 20:36:27.727857    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335411903 bytes)
	I0718 20:36:29.032881    4727 docker.go:649] duration metric: took 1.306555792s to copy over tarball
	I0718 20:36:29.032945    4727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 20:36:30.077797    4727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.044866416s)
	I0718 20:36:30.077812    4727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 20:36:30.092929    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:30.096929    4727 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0718 20:36:30.102897    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:30.190133    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:32.408215    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218126791s)
	I0718 20:36:32.408325    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:32.414564    4727 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 20:36:32.414576    4727 cache_images.go:84] Images are preloaded, skipping loading
	I0718 20:36:32.414588    4727 kubeadm.go:934] updating node { 192.168.105.5 8443 v1.30.3 docker true true} ...
	I0718 20:36:32.414662    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:36:32.414717    4727 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 20:36:32.422967    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:32.422975    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:32.422989    4727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 20:36:32.423001    4727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256000 NodeName:ha-256000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 20:36:32.423064    4727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-256000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 20:36:32.423074    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:36:32.423127    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:36:32.430238    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:36:32.430293    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0718 20:36:32.430329    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:36:32.433734    4727 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 20:36:32.433764    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0718 20:36:32.437628    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0718 20:36:32.443760    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:36:32.449483    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0718 20:36:32.455815    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0718 20:36:32.461759    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:36:32.463168    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:32.467182    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:32.556522    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:36:32.567007    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.5
	I0718 20:36:32.567019    4727 certs.go:194] generating shared ca certs ...
	I0718 20:36:32.567029    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.567195    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:36:32.567242    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:36:32.567249    4727 certs.go:256] generating profile certs ...
	I0718 20:36:32.567287    4727 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:36:32.567299    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt with IP's: []
	I0718 20:36:32.629331    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt ...
	I0718 20:36:32.629341    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt: {Name:mkc9c3e562115edef8b85e012e81a3eb4a2cf75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key ...
	I0718 20:36:32.629649    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key: {Name:mkb41caa35d055a2dcb04d364862addacfff33bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629781    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4
	I0718 20:36:32.629789    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.254]
	I0718 20:36:32.695617    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 ...
	I0718 20:36:32.695626    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4: {Name:mkee89910ca1db08ac083863b0e4a027ae270203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696056    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 ...
	I0718 20:36:32.696061    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4: {Name:mk8365902b4e9f071c9404629a4b35cc6ca6ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696198    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:36:32.696306    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:36:32.696557    4727 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:36:32.696565    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt with IP's: []
	I0718 20:36:32.762976    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt ...
	I0718 20:36:32.762980    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt: {Name:mkb3e0281e7ef362624ad24bb17cfb244b9bc171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763112    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key ...
	I0718 20:36:32.763115    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key: {Name:mkc06a04ddb3616913d2c6f5647bad25fef6f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763224    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:36:32.763237    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:36:32.763247    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:36:32.763257    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:36:32.763268    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:36:32.763279    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:36:32.763290    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:36:32.763301    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:36:32.763382    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:36:32.763410    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:36:32.763415    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:36:32.763434    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:36:32.763451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:36:32.763468    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:36:32.763505    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:32.763524    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.763535    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.763546    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.763807    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:36:32.773281    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:36:32.781447    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:36:32.789770    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:36:32.798040    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 20:36:32.806232    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:36:32.814458    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:36:32.822522    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:36:32.830515    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:36:32.838566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:36:32.846581    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:36:32.854568    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 20:36:32.860769    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:36:32.863035    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:36:32.867352    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868859    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868879    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.870984    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:36:32.874504    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:36:32.878096    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879659    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879678    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.881640    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:36:32.885559    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:36:32.889461    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891114    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891133    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.893171    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
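The last few commands above install each CA into the OpenSSL trust directory by computing its subject hash with `openssl x509 -hash` and linking the PEM under `<hash>.0` in /etc/ssl/certs. A minimal Go sketch of that pattern follows; it simply wraps the same openssl invocation seen in the log, uses the paths shown above, and is an illustration rather than minikube's implementation (which performs these steps over SSH inside the guest).

// Sketch: link a CA certificate under its OpenSSL subject-hash name, mirroring
// the `openssl x509 -hash -noout` + `ln -fs` pair in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ignore the error if the link does not exist yet
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("linked", pem, "->", link)
}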
	I0718 20:36:32.897112    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:36:32.898621    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:36:32.898660    4727 kubeadm.go:392] StartCluster: {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:32.898726    4727 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 20:36:32.903849    4727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 20:36:32.907545    4727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 20:36:32.910740    4727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 20:36:32.914021    4727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 20:36:32.914030    4727 kubeadm.go:157] found existing configuration files:
	
	I0718 20:36:32.914050    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 20:36:32.917254    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 20:36:32.917277    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 20:36:32.920874    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 20:36:32.924549    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 20:36:32.924574    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 20:36:32.928189    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.931542    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 20:36:32.931572    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.934804    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 20:36:32.937825    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 20:36:32.937847    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 20:36:32.941208    4727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 20:36:32.964473    4727 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0718 20:36:32.964502    4727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 20:36:33.010272    4727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 20:36:33.010346    4727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 20:36:33.010394    4727 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0718 20:36:33.080896    4727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 20:36:33.088116    4727 out.go:204]   - Generating certificates and keys ...
	I0718 20:36:33.088149    4727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 20:36:33.088180    4727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 20:36:33.187618    4727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 20:36:33.225765    4727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0718 20:36:33.439485    4727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0718 20:36:33.599214    4727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0718 20:36:33.681357    4727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0718 20:36:33.681418    4727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.726840    4727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0718 20:36:33.726901    4727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.875169    4727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 20:36:34.071575    4727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 20:36:34.163748    4727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0718 20:36:34.163778    4727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 20:36:34.260583    4727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 20:36:34.352375    4727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 20:36:34.395125    4727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 20:36:34.512349    4727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 20:36:34.655223    4727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 20:36:34.655381    4727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 20:36:34.656483    4727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 20:36:34.666848    4727 out.go:204]   - Booting up control plane ...
	I0718 20:36:34.666901    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 20:36:34.666950    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 20:36:34.666982    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 20:36:34.667031    4727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 20:36:34.667081    4727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 20:36:34.667103    4727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 20:36:34.759306    4727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0718 20:36:34.759350    4727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0718 20:36:35.263383    4727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.7975ms
	I0718 20:36:35.263624    4727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0718 20:36:38.766721    4727 kubeadm.go:310] [api-check] The API server is healthy after 3.504642043s
	I0718 20:36:38.772139    4727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 20:36:38.775784    4727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 20:36:38.782114    4727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 20:36:38.782191    4727 kubeadm.go:310] [mark-control-plane] Marking the node ha-256000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 20:36:38.784595    4727 kubeadm.go:310] [bootstrap-token] Using token: yv8fsh.sh51yi31jewcw15j
	I0718 20:36:38.788784    4727 out.go:204]   - Configuring RBAC rules ...
	I0718 20:36:38.788835    4727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 20:36:38.790051    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 20:36:38.796261    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 20:36:38.797188    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 20:36:38.797986    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 20:36:38.798957    4727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 20:36:39.169725    4727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 20:36:39.576005    4727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 20:36:40.169284    4727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 20:36:40.169608    4727 kubeadm.go:310] 
	I0718 20:36:40.169641    4727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 20:36:40.169646    4727 kubeadm.go:310] 
	I0718 20:36:40.169692    4727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 20:36:40.169695    4727 kubeadm.go:310] 
	I0718 20:36:40.169709    4727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 20:36:40.169760    4727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 20:36:40.169794    4727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 20:36:40.169797    4727 kubeadm.go:310] 
	I0718 20:36:40.169826    4727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 20:36:40.169830    4727 kubeadm.go:310] 
	I0718 20:36:40.169856    4727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 20:36:40.169858    4727 kubeadm.go:310] 
	I0718 20:36:40.169883    4727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 20:36:40.169938    4727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 20:36:40.169984    4727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 20:36:40.169987    4727 kubeadm.go:310] 
	I0718 20:36:40.170044    4727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 20:36:40.170090    4727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 20:36:40.170093    4727 kubeadm.go:310] 
	I0718 20:36:40.170134    4727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170222    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 20:36:40.170234    4727 kubeadm.go:310] 	--control-plane 
	I0718 20:36:40.170242    4727 kubeadm.go:310] 
	I0718 20:36:40.170285    4727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 20:36:40.170299    4727 kubeadm.go:310] 
	I0718 20:36:40.170351    4727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170426    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 20:36:40.170492    4727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
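	Note: the join commands printed above embed a bootstrap token with a limited lifetime (24h by default in kubeadm). A minimal sketch of reproducing this init step by hand inside the node, using the config path from the log (preflight ignores abbreviated), and printing a fresh worker join command later:
	  # re-run init with the config file minikube copied to the node (sketch)
	  sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem
	  # generate a fresh join command once the control plane answers
	  sudo kubeadm token create --print-join-command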
	I0718 20:36:40.170502    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:40.170507    4727 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0718 20:36:40.176555    4727 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 20:36:40.183616    4727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 20:36:40.185686    4727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 20:36:40.185696    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 20:36:40.191764    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 20:36:40.332259    4727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 20:36:40.332307    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.332337    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000 minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=true
	I0718 20:36:40.385331    4727 ops.go:34] apiserver oom_adj: -16
	I0718 20:36:40.385383    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.887435    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.387480    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.887395    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.387370    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.885756    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.387374    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.886101    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.386656    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.887355    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.387330    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.887331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.386668    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.886398    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.385335    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.887237    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.387224    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.887271    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.387175    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.885647    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.387168    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.887214    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.387158    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.887129    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.387127    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.887088    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.387119    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.885301    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.387061    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.453749    4727 kubeadm.go:1113] duration metric: took 14.12187225s to wait for elevateKubeSystemPrivileges
	I0718 20:36:54.453766    4727 kubeadm.go:394] duration metric: took 21.55570275s to StartCluster
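	Note: the burst of "kubectl get sa default" runs above is minikube polling roughly every 500ms until the "default" ServiceAccount exists, which signals that the controller-manager is serving. The same wait can be done by hand; a sketch using the kubeconfig path from the log:
	  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry until the default ServiceAccount appears
	  done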
	I0718 20:36:54.453776    4727 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.453868    4727 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.454239    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.454483    4727 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.454492    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:36:54.454494    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 20:36:54.454496    4727 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 20:36:54.454530    4727 addons.go:69] Setting storage-provisioner=true in profile "ha-256000"
	I0718 20:36:54.454533    4727 addons.go:69] Setting default-storageclass=true in profile "ha-256000"
	I0718 20:36:54.454543    4727 addons.go:234] Setting addon storage-provisioner=true in "ha-256000"
	I0718 20:36:54.454546    4727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-256000"
	I0718 20:36:54.454554    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.454722    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.455342    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.455486    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 20:36:54.455762    4727 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 20:36:54.455811    4727 addons.go:234] Setting addon default-storageclass=true in "ha-256000"
	I0718 20:36:54.455823    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.460675    4727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 20:36:54.464747    4727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.464758    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 20:36:54.464769    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.465436    4727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.465440    4727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 20:36:54.465444    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.511774    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.706626    4727 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
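	Note: the pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host address 192.168.105.1. A sketch of verifying the injected hosts block afterwards:
	  sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'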
	I0718 20:36:54.777305    4727 round_trippers.go:463] GET https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 20:36:54.777314    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.777318    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.777321    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.782732    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:36:54.783013    4727 round_trippers.go:463] PUT https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0718 20:36:54.783019    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.783023    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.783026    4727 round_trippers.go:473]     Content-Type: application/json
	I0718 20:36:54.783028    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.784014    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:36:54.792272    4727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 20:36:54.793579    4727 addons.go:510] duration metric: took 339.092083ms for enable addons: enabled=[storage-provisioner default-storageclass]
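	Note: only storage-provisioner and default-storageclass were requested for this profile. A quick manual check that both landed, as a sketch (the pod name storage-provisioner is minikube's usual default and is assumed here):
	  kubectl --kubeconfig=/Users/jenkins/minikube-integration/19302-1213/kubeconfig get storageclass
	  kubectl --kubeconfig=/Users/jenkins/minikube-integration/19302-1213/kubeconfig -n kube-system get pod storage-provisioner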
	I0718 20:36:54.793593    4727 start.go:246] waiting for cluster config update ...
	I0718 20:36:54.793600    4727 start.go:255] writing updated cluster config ...
	I0718 20:36:54.798143    4727 out.go:177] 
	I0718 20:36:54.802340    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.802369    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.805206    4727 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:36:54.813295    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:54.813304    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:54.813383    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:54.813389    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:54.813425    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.813828    4727 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:54.813863    4727 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:36:54.813872    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:tr
ue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.813899    4727 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0718 20:36:54.818236    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:54.833731    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:54.833754    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:54.833854    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:54.833891    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833898    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.833936    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:54.833959    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833965    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.834273    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:54.991167    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:55.074302    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:55.074313    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:55.074505    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.084177    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.084198    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.084247    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2 +20000M
	I0718 20:36:55.092640    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:55.092655    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.092668    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
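	Note: the m02 disk is produced by converting a raw seed image to qcow2 and then growing it by 20000 MB, so the file is thin-provisioned rather than fully allocated. It can be inspected with qemu-img, e.g. (path taken from the log):
	  qemu-img info /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2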
	I0718 20:36:55.092672    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:55.092685    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:55.092723    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.131373    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.131397    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.131401    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:55.131414    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:55.131476    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:55.131491    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:55.131496    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:55.131509    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:55.131515    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:55.131521    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:57.132241    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:57.132260    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:57.132370    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:57.132380    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:57.132387    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:57.132391    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:57.132399    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:57.132403    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:59.134429    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:59.134514    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:59.134610    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:59.134633    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:59.134640    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:59.134645    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:59.134650    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:59.134655    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:01.136704    4727 main.go:141] libmachine: Attempt 3
	I0718 20:37:01.136730    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:01.136864    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:01.136874    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:01.136879    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:01.136892    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:01.136897    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:01.136902    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:03.139087    4727 main.go:141] libmachine: Attempt 4
	I0718 20:37:03.139131    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:03.139262    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:03.139278    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:03.139286    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:03.139290    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:03.139295    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:03.139305    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:05.141342    4727 main.go:141] libmachine: Attempt 5
	I0718 20:37:05.141371    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:05.141487    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:05.141499    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:05.141504    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:05.141508    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:05.141513    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:05.141518    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:07.141729    4727 main.go:141] libmachine: Attempt 6
	I0718 20:37:07.141760    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:07.141844    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:07.141853    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:07.141858    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:07.141862    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:07.141866    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:07.141871    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:09.143893    4727 main.go:141] libmachine: Attempt 7
	I0718 20:37:09.143910    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:09.143997    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:37:09.144009    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:37:09.144011    4727 main.go:141] libmachine: Found match: 5a:e8:7:38:73:30
	I0718 20:37:09.144020    4727 main.go:141] libmachine: IP: 192.168.105.6
	I0718 20:37:09.144023    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
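	Note: the qemu2 driver learns the guest IP by polling /var/db/dhcpd_leases (maintained by macOS bootpd for the socket_vmnet network) for an entry whose hardware address matches the MAC handed to QEMU, with leading zeros dropped. The same lookup by hand, assuming the raw lease file uses the hw_address form suggested by the entries above:
	  grep -B 2 -A 3 'hw_address=1,5a:e8:7:38:73:30' /var/db/dhcpd_leases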
	I0718 20:37:22.173394    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:37:22.173460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.173824    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.173832    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:37:22.224366    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:37:22.224379    4727 buildroot.go:166] provisioning hostname "ha-256000-m02"
	I0718 20:37:22.224437    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.224569    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.224574    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m02 && echo "ha-256000-m02" | sudo tee /etc/hostname
	I0718 20:37:22.281136    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m02
	
	I0718 20:37:22.281193    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.281326    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.281333    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:37:22.335405    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:37:22.335420    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:37:22.335427    4727 buildroot.go:174] setting up certificates
	I0718 20:37:22.335432    4727 provision.go:84] configureAuth start
	I0718 20:37:22.335436    4727 provision.go:143] copyHostCerts
	I0718 20:37:22.335460    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335499    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:37:22.335504    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335625    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:37:22.335755    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335793    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:37:22.335798    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335849    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:37:22.335937    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.335958    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:37:22.335961    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.336009    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:37:22.336098    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m02 san=[127.0.0.1 192.168.105.6 ha-256000-m02 localhost minikube]
	I0718 20:37:22.416839    4727 provision.go:177] copyRemoteCerts
	I0718 20:37:22.417292    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:37:22.417307    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:22.446250    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:37:22.446323    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:37:22.455193    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:37:22.455243    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:37:22.463182    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:37:22.463217    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:37:22.471841    4727 provision.go:87] duration metric: took 136.406375ms to configureAuth
	I0718 20:37:22.471860    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:37:22.472154    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:22.472192    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.472306    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.472312    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:37:22.520570    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:37:22.520580    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:37:22.520661    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:37:22.520720    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.520835    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.520884    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:37:22.573905    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:37:22.573954    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.574074    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.574082    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:37:23.946918    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:37:23.946932    4727 machine.go:97] duration metric: took 1.773574458s to provisionDockerMachine
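	Note: since /lib/systemd/system/docker.service did not yet exist on the fresh m02 VM, the diff fails and the generated unit is moved into place, enabled, and restarted in one command. A sketch of confirming the daemon on the new node afterwards (minikube ssh -n selects a node within the profile):
	  minikube -p ha-256000 ssh -n ha-256000-m02 -- sudo systemctl is-active docker
	  minikube -p ha-256000 ssh -n ha-256000-m02 -- docker version --format '{{.Server.Version}}'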
	I0718 20:37:23.946948    4727 client.go:171] duration metric: took 29.113993584s to LocalClient.Create
	I0718 20:37:23.946964    4727 start.go:167] duration metric: took 29.114041166s to libmachine.API.Create "ha-256000"
	I0718 20:37:23.946968    4727 start.go:293] postStartSetup for "ha-256000-m02" (driver="qemu2")
	I0718 20:37:23.946975    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:37:23.947049    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:37:23.947059    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:23.975789    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:37:23.977316    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:37:23.977325    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:37:23.977414    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:37:23.977531    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:37:23.977538    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:37:23.977667    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:37:23.981129    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:23.989836    4727 start.go:296] duration metric: took 42.86225ms for postStartSetup
	I0718 20:37:23.990279    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:37:23.990466    4727 start.go:128] duration metric: took 29.177367125s to createHost
	I0718 20:37:23.990492    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:23.990582    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:23.990587    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:37:24.039991    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360244.056265969
	
	I0718 20:37:24.040003    4727 fix.go:216] guest clock: 1721360244.056265969
	I0718 20:37:24.040011    4727 fix.go:229] Guest: 2024-07-18 20:37:24.056265969 -0700 PDT Remote: 2024-07-18 20:37:23.990469 -0700 PDT m=+76.856635126 (delta=65.796969ms)
	I0718 20:37:24.040021    4727 fix.go:200] guest clock delta is within tolerance: 65.796969ms
	I0718 20:37:24.040027    4727 start.go:83] releasing machines lock for "ha-256000-m02", held for 29.226966s
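The garbled "date +%!s(MISSING).%!N(MISSING)" above appears to be a logging artifact (format verbs printed without arguments); the command actually sent over SSH is presumably the plain date probe below, which yields the seconds.nanoseconds value used for the clock-delta check:

    # presumed guest-clock probe behind the garbled log line above
    date +%s.%N    # e.g. 1721360244.056265969, as reported by the guest
    # the provisioner compares this against the host clock and accepts a small
    # skew (65.796969ms here, logged as "within tolerance")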
	I0718 20:37:24.045188    4727 out.go:177] * Found network options:
	I0718 20:37:24.048256    4727 out.go:177]   - NO_PROXY=192.168.105.5
	W0718 20:37:24.052331    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:37:24.052639    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:37:24.052695    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:37:24.052702    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:24.052696    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:37:24.052803    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	W0718 20:37:24.080701    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:37:24.080760    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:37:24.120864    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:37:24.120877    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.120944    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.128913    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:37:24.133095    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:37:24.137320    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.137368    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:37:24.141513    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.145685    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:37:24.149674    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.153524    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:37:24.157504    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:37:24.161442    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:37:24.165217    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:37:24.169715    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:37:24.173504    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:37:24.177428    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.249585    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
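The run of sed edits above switches containerd to the cgroupfs driver, the runc.v2 runtime and the standard CNI conf directory before the restart. Assuming a stock config.toml, the effect can be spot-checked afterwards roughly like this (a sketch, not part of the test run):

    # expected settings in /etc/containerd/config.toml after the sed edits above
    grep -E 'SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true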
	I0718 20:37:24.258814    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.258889    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:37:24.266134    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.272789    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:37:24.282701    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.287831    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.293394    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:37:24.332150    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.338444    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.344970    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:37:24.346508    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:37:24.349662    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:37:24.355683    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:37:24.439008    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:37:24.522884    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.522913    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:37:24.529269    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.614408    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:37:26.705797    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.091426708s)
	I0718 20:37:26.705868    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:37:26.711797    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:37:26.719055    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.724747    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:37:26.813533    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:37:26.893596    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:26.965581    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:37:26.972962    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.978785    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:27.061213    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:37:27.087585    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:37:27.087659    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:37:27.091046    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:37:27.091097    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:37:27.092542    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:37:27.112215    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:37:27.112278    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.124950    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.136592    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:37:27.145555    4727 out.go:177]   - env NO_PROXY=192.168.105.5
	I0718 20:37:27.149713    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:37:27.151201    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
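The one-liner above refreshes a single /etc/hosts entry by filtering out any old host.minikube.internal line and appending the new mapping before copying the temp file back into place; unrolled for readability (same commands and addresses as in the log, with the literal tab written as \t):

    {
        grep -v $'\thost.minikube.internal$' /etc/hosts
        printf '192.168.105.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts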
	I0718 20:37:27.155414    4727 mustload.go:65] Loading cluster: ha-256000
	I0718 20:37:27.155551    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:27.156066    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:27.156157    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.6
	I0718 20:37:27.156161    4727 certs.go:194] generating shared ca certs ...
	I0718 20:37:27.156167    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.156269    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:37:27.156316    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:37:27.156321    4727 certs.go:256] generating profile certs ...
	I0718 20:37:27.156387    4727 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:37:27.156400    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9
	I0718 20:37:27.156410    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 192.168.105.254]
	I0718 20:37:27.328161    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 ...
	I0718 20:37:27.328188    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9: {Name:mkff536dfdabd0cc9a693525dd142a97006d4485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 ...
	I0718 20:37:27.328655    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9: {Name:mkb963d77aed955311589ae3cd9371dca3b50bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328816    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:37:27.328945    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:37:27.329100    4727 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:37:27.329110    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:37:27.329125    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:37:27.329137    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:37:27.329150    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:37:27.329162    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:37:27.329176    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:37:27.329186    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:37:27.329197    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:37:27.329271    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:37:27.329299    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:37:27.329305    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:37:27.329347    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:37:27.329372    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:37:27.329396    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:37:27.329451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:27.329478    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.329491    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.329501    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.329519    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:27.355925    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0718 20:37:27.357647    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0718 20:37:27.362088    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0718 20:37:27.363733    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0718 20:37:27.367759    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0718 20:37:27.369261    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0718 20:37:27.373839    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0718 20:37:27.375475    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0718 20:37:27.379174    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0718 20:37:27.380628    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0718 20:37:27.384809    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0718 20:37:27.386562    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0718 20:37:27.390606    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:37:27.399865    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:37:27.408308    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:37:27.416747    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:37:27.425050    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0718 20:37:27.433244    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:37:27.441306    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:37:27.449446    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:37:27.457566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:37:27.465676    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:37:27.473743    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:37:27.482174    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0718 20:37:27.487947    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0718 20:37:27.493902    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0718 20:37:27.499712    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0718 20:37:27.505265    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0718 20:37:27.511047    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0718 20:37:27.517340    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0718 20:37:27.523229    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:37:27.525438    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:37:27.529080    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530597    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530617    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.532775    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:37:27.536483    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:37:27.540031    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541631    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541649    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.543631    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:37:27.547571    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:37:27.551419    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553057    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553079    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.555162    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
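The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: each CA placed under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs so OpenSSL can look it up by subject. A sketch of how such a link is derived, using minikubeCA.pem from the log:

    # compute the subject hash (b5213941 for minikubeCA.pem in this run) and link it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"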
	I0718 20:37:27.559227    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:37:27.560725    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:37:27.560754    4727 kubeadm.go:934] updating node {m02 192.168.105.6 8443 v1.30.3 docker true true} ...
	I0718 20:37:27.560799    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:37:27.560814    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:37:27.560837    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:37:27.572539    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:37:27.572577    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
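The generated manifest above runs kube-vip as a static pod with leader election on the plndr-cp-lock lease, so the VIP 192.168.105.254 follows whichever control-plane node currently holds that lease. Once the cluster is reachable, the holder can be inspected with something like this (a sketch, run against the cluster kubeconfig):

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'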
	I0718 20:37:27.572623    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.576082    4727 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0718 20:37:27.576121    4727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm
	I0718 20:37:27.579785    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet
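Each download above pins a checksum via ?checksum=file:...sha256, so the cached binaries are verified as they land. An equivalent manual check on the kubelet binary would look roughly like this (a sketch, assuming the release .sha256 file contains just the hex digest):

    cd /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3
    curl -sSLO https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | shasum -a 256 -c -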
	I0718 20:37:34.561853    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.561928    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.564073    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0718 20:37:34.564095    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (49938584 bytes)
	I0718 20:37:35.510887    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.510952    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.512864    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0718 20:37:35.512884    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (48955544 bytes)
	I0718 20:37:42.606961    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:37:42.613080    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.613168    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.614817    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0718 20:37:42.614833    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (96467384 bytes)
	I0718 20:37:43.119287    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0718 20:37:43.122637    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0718 20:37:43.128732    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:37:43.134516    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0718 20:37:43.141275    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:37:43.142606    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:43.146857    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:43.230113    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:37:43.243145    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:43.243333    4727 start.go:317] joinCluster: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0718 20:37:43.243382    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 20:37:43.243391    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:43.371073    4727 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:37:43.371092    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0718 20:38:03.232381    4727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (19.861822375s)
	I0718 20:38:03.232396    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 20:38:03.485331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000-m02 minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=false
	I0718 20:38:03.530961    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0718 20:38:03.578648    4727 start.go:319] duration metric: took 20.3358655s to joinCluster
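With the join, label and taint steps above done, the second control-plane member should be visible from the first node; the same kubectl binary and kubeconfig the log uses inside the guest can confirm it (a sketch):

    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
    # ha-256000-m02 is expected to report NotReady until kubelet and the CNI settle,
    # which is what the readiness polling below waits for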
	I0718 20:38:03.578688    4727 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:03.578898    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:03.583884    4727 out.go:177] * Verifying Kubernetes components...
	I0718 20:38:03.590972    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:03.702999    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:38:03.709797    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:38:03.709929    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0718 20:38:03.709957    4727 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.105.254:8443 with https://192.168.105.5:8443
	I0718 20:38:03.710058    4727 node_ready.go:35] waiting up to 6m0s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:03.710093    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:03.710097    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:03.710101    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:03.710109    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:03.716299    4727 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 20:38:04.212157    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.212175    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.212180    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.212182    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.217870    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:38:04.711681    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.711692    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.711696    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.711698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.713463    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.212138    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.212149    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.212153    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.212156    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.214175    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:05.711331    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.711345    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.711360    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.711363    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.712682    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.713155    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
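The poll loop here simply GETs the node object every ~500ms and inspects its Ready condition; the equivalent one-off check by hand would be (a sketch, run against the cluster kubeconfig):

    kubectl get node ha-256000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # prints False until kubelet and the CNI are up, then True (seen below at 20:38:22)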
	I0718 20:38:06.210250    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.210264    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.210268    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.210271    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.212254    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:06.711235    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.711255    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.711260    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.711262    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.712940    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.212089    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.212100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.212104    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.212106    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.214317    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:07.712070    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.712079    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.712083    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.712086    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.713825    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.714102    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:08.211862    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.211878    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.211883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.211885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.213993    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:08.712062    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.712075    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.712079    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.712081    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.713753    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.212027    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.212036    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.212052    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.212055    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.213833    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.712020    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.712029    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.712033    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.712035    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.713439    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.212016    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.212025    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.212029    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.212031    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.213662    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.213924    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:10.711085    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.711100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.711114    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.711117    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.712848    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.211980    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.211995    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.211999    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.212002    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.213760    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.711981    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.711994    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.712005    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.712008    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.713435    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.211955    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.211969    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.211974    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.211976    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.213759    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.214202    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:12.711912    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.711929    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.711933    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.711935    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.713382    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.211920    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.211932    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.211941    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.211943    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.213828    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.711194    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.711206    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.711209    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.711211    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.712757    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:14.211901    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.211919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.211924    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.211932    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.213956    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:14.214285    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:14.711860    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.711876    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.711883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.711885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.713170    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.211895    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.211907    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.211911    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.211913    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.213693    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.711835    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.711849    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.711863    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.711865    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.713487    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.211839    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.211844    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.211846    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.213365    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.711659    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.711669    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.711673    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.711675    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.713252    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.713433    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:17.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.211830    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.211834    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.211836    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.213413    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:17.711756    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.711781    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.711785    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.711788    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.713341    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.211779    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.211794    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.211798    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.211800    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.213551    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.711749    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.711759    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.711764    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.711766    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.713325    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.713645    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:19.211738    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.211750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.211754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.211756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.213507    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:19.711717    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.711731    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.711734    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.711736    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.713476    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.211230    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.211271    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.211314    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.211318    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.212922    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.710773    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.710783    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.710787    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.710790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.712163    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.211705    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.211717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.211738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.211742    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.213362    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.213898    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:21.711683    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.711698    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.711702    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.711704    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.713411    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.211928    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.211938    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.211942    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.211944    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.214292    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.214473    4727 node_ready.go:49] node "ha-256000-m02" has status "Ready":"True"
	I0718 20:38:22.214479    4727 node_ready.go:38] duration metric: took 18.50492425s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:22.214483    4727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:22.214513    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:22.214523    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.214528    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.214533    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.216823    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.221656    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.221688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gl7wn
	I0718 20:38:22.221691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.221695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.221698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.223037    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.223438    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.223443    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.223447    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.223449    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.224627    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.224906    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.224912    4727 pod_ready.go:81] duration metric: took 3.247917ms for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224916    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224935    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t5fk7
	I0718 20:38:22.224937    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.224950    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.224954    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.226106    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.226400    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.226404    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.226411    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.226414    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.227526    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.227886    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.227891    4727 pod_ready.go:81] duration metric: took 2.972458ms for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227894    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227913    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000
	I0718 20:38:22.227919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.227923    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.227925    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.228991    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.229395    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.229399    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.229402    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.229406    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.230465    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.230693    4727 pod_ready.go:92] pod "etcd-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.230699    4727 pod_ready.go:81] duration metric: took 2.801916ms for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230703    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230720    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000-m02
	I0718 20:38:22.230723    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.230726    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.230728    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.231834    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.232263    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.232268    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.232271    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.232273    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.233360    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.233783    4727 pod_ready.go:92] pod "etcd-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.233789    4727 pod_ready.go:81] duration metric: took 3.083416ms for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.233794    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.413762    4727 request.go:629] Waited for 179.941666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413824    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413828    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.413841    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.413846    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.415462    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.613785    4727 request.go:629] Waited for 197.877917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613838    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613844    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.613847    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.613849    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.616581    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.616806    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.616814    4727 pod_ready.go:81] duration metric: took 383.02725ms for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.616819    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.813743    4727 request.go:629] Waited for 196.894708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813781    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813784    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.813788    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.813790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.815511    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.012375    4727 request.go:629] Waited for 196.496584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012418    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012422    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.012426    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.012428    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.014100    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.014297    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.014304    4727 pod_ready.go:81] duration metric: took 397.4915ms for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.014308    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.213728    4727 request.go:629] Waited for 199.392916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213764    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213767    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.213771    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.213774    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.215292    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.412016    4727 request.go:629] Waited for 196.230667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412048    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412050    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.412055    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.412057    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.414117    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.414317    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.414324    4727 pod_ready.go:81] duration metric: took 400.022917ms for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.414329    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.613726    4727 request.go:629] Waited for 199.367083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613754    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613757    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.613760    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.613763    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.615829    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.813718    4727 request.go:629] Waited for 197.566667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813747    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.813754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.813756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.815391    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.815670    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.815679    4727 pod_ready.go:81] duration metric: took 401.357791ms for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.815685    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.013744    4727 request.go:629] Waited for 198.028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013777    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013780    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.013783    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.013785    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.015358    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.213717    4727 request.go:629] Waited for 197.87625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213750    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213772    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.213776    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.213779    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.215177    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.215486    4727 pod_ready.go:92] pod "kube-proxy-99sn4" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.215494    4727 pod_ready.go:81] duration metric: took 399.816291ms for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.215499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.412543    4727 request.go:629] Waited for 197.022333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412572    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412576    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.412580    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.412582    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.414200    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.613688    4727 request.go:629] Waited for 199.188292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613723    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613734    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.613738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.613740    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.616115    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:24.616487    4727 pod_ready.go:92] pod "kube-proxy-jxnv9" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.616495    4727 pod_ready.go:81] duration metric: took 401.003958ms for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.616499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.811999    4727 request.go:629] Waited for 195.4745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812037    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812040    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.812044    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.812046    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.813599    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.013712    4727 request.go:629] Waited for 199.880375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013743    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013746    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.013750    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.013752    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.015408    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.015677    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.015685    4727 pod_ready.go:81] duration metric: took 399.1935ms for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.015689    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.213690    4727 request.go:629] Waited for 197.964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213729    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213735    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.213739    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.213741    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.215582    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.413674    4727 request.go:629] Waited for 197.841584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413700    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413702    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.413714    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.413717    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.415433    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.415627    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.415633    4727 pod_ready.go:81] duration metric: took 399.951542ms for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.415638    4727 pod_ready.go:38] duration metric: took 3.201238458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:25.415647    4727 api_server.go:52] waiting for apiserver process to appear ...
	I0718 20:38:25.415719    4727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:38:25.421413    4727 api_server.go:72] duration metric: took 21.843316333s to wait for apiserver process to appear ...
	I0718 20:38:25.421422    4727 api_server.go:88] waiting for apiserver healthz status ...
	I0718 20:38:25.421429    4727 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0718 20:38:25.424174    4727 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0718 20:38:25.424198    4727 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0718 20:38:25.424200    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.424204    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.424207    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.424682    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:38:25.424723    4727 api_server.go:141] control plane version: v1.30.3
	I0718 20:38:25.424729    4727 api_server.go:131] duration metric: took 3.305084ms to wait for apiserver health ...
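The healthz wait logged above is simply an HTTPS GET against the apiserver that expects a 200 response with an "ok" body before the version check proceeds. Below is a minimal standalone Go sketch of that style of probe; the hard-coded URL and the InsecureSkipVerify transport are illustrative assumptions only (minikube's real client authenticates against the cluster CA), so treat it as a sketch rather than the actual api_server wait code.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz issues a GET against an apiserver /healthz endpoint and
    // reports whether it answered 200 with a readable body, roughly the kind
    // of check the log above records.
    func probeHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Assumption: certificate verification is skipped for the sketch;
    		// a real client would trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("healthz: %s\n", body)
    	return nil
    }

    func main() {
    	// The endpoint mirrors the one polled in the log above.
    	if err := probeHealthz("https://192.168.105.5:8443/healthz"); err != nil {
    		fmt.Println("not ready:", err)
    	}
    }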
	I0718 20:38:25.424732    4727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 20:38:25.613673    4727 request.go:629] Waited for 188.916583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613714    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.613721    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.613723    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.616608    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:25.620463    4727 system_pods.go:59] 17 kube-system pods found
	I0718 20:38:25.620472    4727 system_pods.go:61] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:25.620475    4727 system_pods.go:61] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:25.620477    4727 system_pods.go:61] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:25.620479    4727 system_pods.go:61] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:25.620480    4727 system_pods.go:61] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:25.620482    4727 system_pods.go:61] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:25.620484    4727 system_pods.go:61] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:25.620486    4727 system_pods.go:61] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:25.620488    4727 system_pods.go:61] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:25.620490    4727 system_pods.go:61] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:25.620492    4727 system_pods.go:61] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:25.620493    4727 system_pods.go:61] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:25.620495    4727 system_pods.go:61] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:25.620497    4727 system_pods.go:61] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:25.620498    4727 system_pods.go:61] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:25.620500    4727 system_pods.go:61] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:25.620502    4727 system_pods.go:61] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:25.620505    4727 system_pods.go:74] duration metric: took 195.775375ms to wait for pod list to return data ...
	I0718 20:38:25.620509    4727 default_sa.go:34] waiting for default service account to be created ...
	I0718 20:38:25.813683    4727 request.go:629] Waited for 193.137584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813709    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813712    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.813716    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.813721    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.815354    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.815466    4727 default_sa.go:45] found service account: "default"
	I0718 20:38:25.815474    4727 default_sa.go:55] duration metric: took 194.966875ms for default service account to be created ...
	I0718 20:38:25.815479    4727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 20:38:26.013652    4727 request.go:629] Waited for 198.147166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.013695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.013702    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.016448    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:26.020596    4727 system_pods.go:86] 17 kube-system pods found
	I0718 20:38:26.020604    4727 system_pods.go:89] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:26.020607    4727 system_pods.go:89] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:26.020609    4727 system_pods.go:89] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:26.020611    4727 system_pods.go:89] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:26.020613    4727 system_pods.go:89] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:26.020615    4727 system_pods.go:89] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:26.020617    4727 system_pods.go:89] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:26.020619    4727 system_pods.go:89] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:26.020621    4727 system_pods.go:89] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:26.020622    4727 system_pods.go:89] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:26.020624    4727 system_pods.go:89] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:26.020626    4727 system_pods.go:89] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:26.020628    4727 system_pods.go:89] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:26.020629    4727 system_pods.go:89] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:26.020631    4727 system_pods.go:89] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:26.020633    4727 system_pods.go:89] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:26.020635    4727 system_pods.go:89] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:26.020641    4727 system_pods.go:126] duration metric: took 205.165291ms to wait for k8s-apps to be running ...
	I0718 20:38:26.020645    4727 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 20:38:26.020720    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:38:26.027026    4727 system_svc.go:56] duration metric: took 6.37875ms WaitForService to wait for kubelet
	I0718 20:38:26.027036    4727 kubeadm.go:582] duration metric: took 22.448955791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:38:26.027047    4727 node_conditions.go:102] verifying NodePressure condition ...
	I0718 20:38:26.213670    4727 request.go:629] Waited for 186.592667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213748    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213751    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.213756    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.213758    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.215369    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:26.215702    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215710    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215716    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215719    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215721    4727 node_conditions.go:105] duration metric: took 188.677125ms to run NodePressure ...
	I0718 20:38:26.215733    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:38:26.215747    4727 start.go:255] writing updated cluster config ...
	I0718 20:38:26.221138    4727 out.go:177] 
	I0718 20:38:26.225195    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:26.225251    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.230070    4727 out.go:177] * Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	I0718 20:38:26.238085    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:38:26.238092    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:38:26.238177    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:38:26.238184    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:38:26.238226    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.238529    4727 start.go:360] acquireMachinesLock for ha-256000-m03: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:38:26.238563    4727 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "ha-256000-m03"
	I0718 20:38:26.238573    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:26.238613    4727 start.go:125] createHost starting for "m03" (driver="qemu2")
	I0718 20:38:26.243026    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:38:26.268172    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:38:26.268206    4727 client.go:168] LocalClient.Create starting
	I0718 20:38:26.268290    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:38:26.268328    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268338    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268376    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:38:26.268399    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268406    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268691    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:38:26.426584    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:38:26.572781    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:38:26.572789    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:38:26.573022    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.588299    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.588321    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.588408    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2 +20000M
	I0718 20:38:26.597072    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:38:26.597089    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.597102    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.597113    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:38:26.597129    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:38:26.597163    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7f:0e:0c:6d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.641473    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.641500    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.641504    4727 main.go:141] libmachine: Attempt 0
	I0718 20:38:26.641520    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:26.641735    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:26.641749    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:26.641756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:26.641761    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:26.641765    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:26.641770    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:26.641776    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:28.643878    4727 main.go:141] libmachine: Attempt 1
	I0718 20:38:28.643913    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:28.644011    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:28.644023    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:28.644028    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:28.644032    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:28.644036    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:28.644046    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:28.644052    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:30.646081    4727 main.go:141] libmachine: Attempt 2
	I0718 20:38:30.646120    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:30.646235    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:30.646244    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:30.646250    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:30.646254    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:30.646258    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:30.646262    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:30.646267    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:32.648349    4727 main.go:141] libmachine: Attempt 3
	I0718 20:38:32.648374    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:32.648466    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:32.648477    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:32.648481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:32.648486    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:32.648497    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:32.648501    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:32.648514    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:34.650548    4727 main.go:141] libmachine: Attempt 4
	I0718 20:38:34.650566    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:34.650664    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:34.650674    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:34.650678    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:34.650682    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:34.650686    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:34.650692    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:34.650696    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:36.652758    4727 main.go:141] libmachine: Attempt 5
	I0718 20:38:36.652796    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:36.652971    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:36.652995    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:36.653008    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:36.653088    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:36.653108    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:36.653113    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:36.653119    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:38.654089    4727 main.go:141] libmachine: Attempt 6
	I0718 20:38:38.654205    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:38.654304    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:38.654315    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:38.654320    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:38.654329    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:38.654333    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:38.654338    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:38.654343    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:40.656398    4727 main.go:141] libmachine: Attempt 7
	I0718 20:38:40.656425    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:40.656535    4727 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0718 20:38:40.656552    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:d2:7f:e:c:6d:ba ID:1,d2:7f:e:c:6d:ba Lease:0x669b313f}
	I0718 20:38:40.656554    4727 main.go:141] libmachine: Found match: d2:7f:e:c:6d:ba
	I0718 20:38:40.656561    4727 main.go:141] libmachine: IP: 192.168.105.7
	I0718 20:38:40.656567    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
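The numbered attempts above are the qemu2 driver polling the host's DHCP lease database until the new VM's MAC address shows up, which is how it learns the node's IP (192.168.105.7 here). A simplified Go sketch of such a lookup follows; the brace-delimited record layout and the ip_address/hw_address field names are assumptions modelled on the macOS bootpd lease file, and the sketch assumes ip_address precedes hw_address within a record, so it is not the driver's actual parser.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans a /var/db/dhcpd_leases-style file for a record whose
    // hw_address contains the given MAC string and returns its ip_address.
    // Assumption: each record is brace-delimited and lists ip_address before
    // hw_address, as in the macOS bootpd lease format.
    func findLeaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "{": // start of a new lease record
    			ip = ""
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
    			if ip != "" {
    				return ip, nil
    			}
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	// MAC copied from the search logged above.
    	ip, err := findLeaseIP("/var/db/dhcpd_leases", "d2:7f:e:c:6d:ba")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("IP:", ip)
    }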
	I0718 20:38:49.679874    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:38:49.680098    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.680386    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.680393    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:38:49.720341    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:38:49.720352    4727 buildroot.go:166] provisioning hostname "ha-256000-m03"
	I0718 20:38:49.720396    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.720501    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.720507    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m03 && echo "ha-256000-m03" | sudo tee /etc/hostname
	I0718 20:38:49.765619    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m03
	
	I0718 20:38:49.765691    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.765821    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.765830    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:38:49.809445    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:38:49.809457    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:38:49.809463    4727 buildroot.go:174] setting up certificates
	I0718 20:38:49.809467    4727 provision.go:84] configureAuth start
	I0718 20:38:49.809471    4727 provision.go:143] copyHostCerts
	I0718 20:38:49.809497    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809560    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:38:49.809567    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809680    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:38:49.810515    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810551    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:38:49.810554    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810618    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:38:49.810856    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810884    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:38:49.810888    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810942    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:38:49.811128    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m03 san=[127.0.0.1 192.168.105.7 ha-256000-m03 localhost minikube]
	I0718 20:38:49.892392    4727 provision.go:177] copyRemoteCerts
	I0718 20:38:49.892426    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:38:49.892435    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:49.917004    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:38:49.917069    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:38:49.925760    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:38:49.925809    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:38:49.934495    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:38:49.934547    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:38:49.944465    4727 provision.go:87] duration metric: took 134.994083ms to configureAuth
	I0718 20:38:49.944477    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:38:49.946418    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:49.946460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.946554    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.946559    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:38:49.988863    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:38:49.988874    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:38:49.988957    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:38:49.989005    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.989117    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.989151    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	Environment="NO_PROXY=192.168.105.5,192.168.105.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:38:50.033434    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	Environment=NO_PROXY=192.168.105.5,192.168.105.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:38:50.033494    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:50.033609    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:50.033618    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:38:51.357934    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
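The command above is minikube's write-if-changed update for the Docker unit: the rendered unit is written to docker.service.new, compared against the installed unit with diff, and only when they differ (or, as on this freshly created node, when no unit exists yet, which is why diff prints "No such file or directory") is the new file moved into place, followed by daemon-reload, enable and restart. A standalone sketch of the same pattern, with a hypothetical unit name purely for illustration:

    # Render the candidate unit, then install it only if it differs from what is installed.
    sudo tee /lib/systemd/system/example.service.new >/dev/null <<'EOF'
    [Unit]
    Description=Example service
    [Service]
    ExecStart=/usr/bin/true
    [Install]
    WantedBy=multi-user.target
    EOF
    sudo diff -u /lib/systemd/system/example.service /lib/systemd/system/example.service.new || {
      sudo mv /lib/systemd/system/example.service.new /lib/systemd/system/example.service
      sudo systemctl daemon-reload && sudo systemctl enable example.service && sudo systemctl restart example.service
    }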
	
	I0718 20:38:51.357948    4727 machine.go:97] duration metric: took 1.678110291s to provisionDockerMachine
	I0718 20:38:51.357955    4727 client.go:171] duration metric: took 25.090436s to LocalClient.Create
	I0718 20:38:51.357970    4727 start.go:167] duration metric: took 25.090492834s to libmachine.API.Create "ha-256000"
	I0718 20:38:51.357987    4727 start.go:293] postStartSetup for "ha-256000-m03" (driver="qemu2")
	I0718 20:38:51.357993    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:38:51.358064    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:38:51.358075    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.383362    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:38:51.385220    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:38:51.385229    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:38:51.385339    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:38:51.385460    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:38:51.385466    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:38:51.385589    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:38:51.389076    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:38:51.397667    4727 start.go:296] duration metric: took 39.676333ms for postStartSetup
	I0718 20:38:51.398148    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:51.398353    4727 start.go:128] duration metric: took 25.1604295s to createHost
	I0718 20:38:51.398381    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:51.398475    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:51.398479    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:38:51.443684    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360331.726119547
	
	I0718 20:38:51.443697    4727 fix.go:216] guest clock: 1721360331.726119547
	I0718 20:38:51.443701    4727 fix.go:229] Guest: 2024-07-18 20:38:51.726119547 -0700 PDT Remote: 2024-07-18 20:38:51.39836 -0700 PDT m=+164.266937085 (delta=327.759547ms)
	I0718 20:38:51.443713    4727 fix.go:200] guest clock delta is within tolerance: 327.759547ms
	I0718 20:38:51.443716    4727 start.go:83] releasing machines lock for "ha-256000-m03", held for 25.205843709s
	I0718 20:38:51.447883    4727 out.go:177] * Found network options:
	I0718 20:38:51.451892    4727 out.go:177]   - NO_PROXY=192.168.105.5,192.168.105.6
	W0718 20:38:51.455815    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.455829    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456208    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456223    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:38:51.456298    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:38:51.456327    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:38:51.479804    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:38:51.479862    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:38:51.524774    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:38:51.524786    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.524847    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.531855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:38:51.535855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:38:51.539545    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.539580    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:38:51.543520    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.547437    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:38:51.551284    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.555870    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:38:51.559926    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:38:51.563772    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:38:51.567972    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:38:51.572324    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:38:51.576791    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:38:51.580307    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.641726    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:38:51.654538    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.654606    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:38:51.661500    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.671940    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:38:51.683005    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.689286    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.694846    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:38:51.739658    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.745604    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.752465    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:38:51.754039    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:38:51.757754    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:38:51.764400    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:38:51.833658    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:38:51.901993    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.902021    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:38:51.910153    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.983567    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:39:53.221259    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.239360917s)
	I0718 20:39:53.221338    4727 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 20:39:53.233907    4727 out.go:177] 
	W0718 20:39:53.237861    4727 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:38:50 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531478880Z" level=info msg="Starting up"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531868672Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.532448547Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.550167964Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560007672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560035005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560063505Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560074839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560111130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560123547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560217922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560230922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560237130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560241589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560270464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560366505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561097130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561114380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561185047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561197839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561245172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561280130Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563923422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563946005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563952880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563959547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563972505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564012380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564132589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564175464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564185714Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564191797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564197839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564204005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564210464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564216297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564222297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564228089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564233922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564239422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564256255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564264589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564270589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564276339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564281380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564287547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564292755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564298214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564303922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564310047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564315047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564320255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564325630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564332547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564341589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564346797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564352089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564402380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564416755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564421630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564427380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564432047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564437755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564467089Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564611964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564632964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564646839Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564655005Z" level=info msg="containerd successfully booted in 0.014823s"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.553636672Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.561497047Z" level=info msg="Loading containers: start."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.589775631Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.620757631Z" level=info msg="Loading containers: done."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624562881Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624599339Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:38:51 ha-256000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641454297Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641495839Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:38:52 ha-256000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.265389656Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266153693Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266192011Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266216137Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266284865Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:53 ha-256000-m03 dockerd[931]: time="2024-07-19T03:38:53.282812481Z" level=info msg="Starting up"
	Jul 19 03:39:53 ha-256000-m03 dockerd[931]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 20:39:53.237915    4727 out.go:239] * 
	W0718 20:39:53.239556    4727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:39:53.244752    4727 out.go:177] 
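To summarize the failure captured above: after minikube stopped the system containerd service while selecting Docker as the runtime, the restarted dockerd (pid 931) blocked trying to dial /run/containerd/containerd.sock, hit "context deadline exceeded" after roughly 60 seconds, and systemd marked docker.service failed, so provisioning of ha-256000-m03 aborted with RUNTIME_ENABLE. When investigating a run like this interactively, reasonable next steps on the guest (generic diagnostics, not commands from the recorded run) would be:

    sudo systemctl status containerd docker --no-pager
    sudo journalctl -u containerd --no-pager | tail -n 50
    ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock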
	
	
	==> Docker <==
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62c92a2e03424d74abec35244521f1b7761982d7dbb7311513fb13f822c225ed/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f20cc01dd922b82b1ee5c6472024624755b1340ebceab21cf25c6eacf6e19c4/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5db9ae745b118ebe428663f3f1c8c679cdc1a26cea72ee6016f951ae34fc28ea/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858940540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858976718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858984229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.859018904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861914444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861992224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862003156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862051518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889214398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889287171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889293388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889346507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061800448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061875454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061930291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a81719e2049682e90e011b40424dd53e2ae913d00000287c821ac163206c9b20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 03:39:56 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404399110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404453937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404462477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404689325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf6fa4236c452       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   a81719e204968       busybox-fc5497c4f-5922h
	6dfd469e7d36e       ba04bb24b9575                                                                                         14 minutes ago      Running             storage-provisioner       0                   5db9ae745b118       storage-provisioner
	1097379f4f6cb       2437cf7621777                                                                                         14 minutes ago      Running             coredns                   0                   62c92a2e03424       coredns-7db6d8ff4d-gl7wn
	9a1c088f8966e       2437cf7621777                                                                                         14 minutes ago      Running             coredns                   0                   5f20cc01dd922       coredns-7db6d8ff4d-t5fk7
	74fc7ee221313       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              14 minutes ago      Running             kindnet-cni               0                   f7fb0ae46c979       kindnet-znvgn
	9103cd3e30ac5       2351f570ed0ea                                                                                         14 minutes ago      Running             kube-proxy                0                   dd4c5c6f3ce08       kube-proxy-jxnv9
	8128016ed9c34       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   e405a8655e904       kube-vip-ha-256000
	d5ff116ccff16       014faa467e297                                                                                         15 minutes ago      Running             etcd                      0                   1dd441769aa2a       etcd-ha-256000
	29f96bba40d3a       d48f992a22722                                                                                         15 minutes ago      Running             kube-scheduler            0                   aa59c4a58dba5       kube-scheduler-ha-256000
	70ffd55232c0b       8e97cdb19e7cc                                                                                         15 minutes ago      Running             kube-controller-manager   0                   96446dab38e98       kube-controller-manager-ha-256000
	dff4e67b66806       61773190d42ff                                                                                         15 minutes ago      Running             kube-apiserver            0                   877c87b7df476       kube-apiserver-ha-256000
	
	
	==> coredns [1097379f4f6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37765 - 42644 "HINFO IN 3312804127670044151.9315725327003923. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.009474143s
	[INFO] 10.244.0.4:33989 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.044131336s
	[INFO] 10.244.0.4:49979 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001205888s
	[INFO] 10.244.1.2:54862 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000064045s
	[INFO] 10.244.0.4:54057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097379s
	[INFO] 10.244.0.4:39996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065545s
	[INFO] 10.244.0.4:39732 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063878s
	[INFO] 10.244.1.2:57277 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070961s
	[INFO] 10.244.1.2:44544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00059536s
	[INFO] 10.244.1.2:33879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042043s
	[INFO] 10.244.1.2:41170 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039002s
	[INFO] 10.244.0.4:32818 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000023751s
	[INFO] 10.244.0.4:44658 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027251s
	[INFO] 10.244.1.2:36566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093796s
	[INFO] 10.244.1.2:41685 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035752s
	[INFO] 10.244.1.2:36603 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000019667s
	[INFO] 10.244.0.4:51415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000060336s
	[INFO] 10.244.0.4:50758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000047377s
	[INFO] 10.244.1.2:56872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077712s
	[INFO] 10.244.1.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047752s
	[INFO] 10.244.1.2:48345 - 5 "PTR IN 1.105.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000043752s
	
	
	==> coredns [9a1c088f8966] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42392 - 40278 "HINFO IN 2632545797447059373.9195703630793318012. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009665964s
	[INFO] 10.244.0.4:39096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234719s
	[INFO] 10.244.0.4:39212 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010352553s
	[INFO] 10.244.1.2:39974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082254s
	[INFO] 10.244.1.2:48244 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00062732s
	[INFO] 10.244.1.2:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000022126s
	[INFO] 10.244.0.4:43528 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001761788s
	[INFO] 10.244.0.4:39922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072504s
	[INFO] 10.244.0.4:40557 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054253s
	[INFO] 10.244.0.4:36599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000831538s
	[INFO] 10.244.0.4:35378 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072337s
	[INFO] 10.244.1.2:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082296s
	[INFO] 10.244.1.2:55926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000027209s
	[INFO] 10.244.1.2:50938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031001s
	[INFO] 10.244.1.2:32874 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004696s
	[INFO] 10.244.0.4:39411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067337s
	[INFO] 10.244.0.4:56069 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000028543s
	[INFO] 10.244.1.2:60061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076628s
	[INFO] 10.244.0.4:57199 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087171s
	[INFO] 10.244.0.4:55865 - 5 "PTR IN 1.105.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000063753s
	[INFO] 10.244.1.2:50952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059502s
	
	
	==> describe nodes <==
	Name:               ha-256000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:51:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-256000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d710ce1e1896426084c421362e18dda0
	  System UUID:                d710ce1e1896426084c421362e18dda0
	  Boot ID:                    83486cc1-e7b0-4568-bb5a-c46474de14e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5922h              0 (0%)        0 (0%)      0 (0%)         0 (0%)       11m
	  kube-system                 coredns-7db6d8ff4d-gl7wn             100m (5%)     0 (0%)      70Mi (3%)      170Mi (8%)   14m
	  kube-system                 coredns-7db6d8ff4d-t5fk7             100m (5%)     0 (0%)      70Mi (3%)      170Mi (8%)   14m
	  kube-system                 etcd-ha-256000                       100m (5%)     0 (0%)      100Mi (4%)     0 (0%)       14m
	  kube-system                 kindnet-znvgn                        100m (5%)     100m (5%)   50Mi (2%)      50Mi (2%)    14m
	  kube-system                 kube-apiserver-ha-256000             250m (12%)    0 (0%)      0 (0%)         0 (0%)       14m
	  kube-system                 kube-controller-manager-ha-256000    200m (10%)    0 (0%)      0 (0%)         0 (0%)       14m
	  kube-system                 kube-proxy-jxnv9                     0 (0%)        0 (0%)      0 (0%)         0 (0%)       14m
	  kube-system                 kube-scheduler-ha-256000             100m (5%)     0 (0%)      0 (0%)         0 (0%)       14m
	  kube-system                 kube-vip-ha-256000                   0 (0%)        0 (0%)      0 (0%)         0 (0%)       14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)         0 (0%)       14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node ha-256000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node ha-256000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node ha-256000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	  Normal  NodeReady                14m   kubelet          Node ha-256000 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	
	
	Name:               ha-256000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ha-256000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  System UUID:                b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  Boot ID:                    b548924b-9c86-4ba2-9a9e-2e5cc7830327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqdhb                  0 (0%)        0 (0%)      0 (0%)       0 (0%)      11m
	  kube-system                 etcd-ha-256000-m02                       100m (5%)     0 (0%)      100Mi (4%)   0 (0%)      13m
	  kube-system                 kindnet-2mvfm                            100m (5%)     100m (5%)   50Mi (2%)    50Mi (2%)   13m
	  kube-system                 kube-apiserver-ha-256000-m02             250m (12%)    0 (0%)      0 (0%)       0 (0%)      13m
	  kube-system                 kube-controller-manager-ha-256000-m02    200m (10%)    0 (0%)      0 (0%)       0 (0%)      13m
	  kube-system                 kube-proxy-99sn4                         0 (0%)        0 (0%)      0 (0%)       0 (0%)      13m
	  kube-system                 kube-scheduler-ha-256000-m02             100m (5%)     0 (0%)      0 (0%)       0 (0%)      13m
	  kube-system                 kube-vip-ha-256000-m02                   0 (0%)        0 (0%)      0 (0%)       0 (0%)      13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-256000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650707] EINJ: EINJ table not found.
	[  +0.549800] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.136927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000360] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +3.624626] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.080461] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.034842] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.469016] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.194273] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.081032] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.086446] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +2.293076] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.088824] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.085311] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.095642] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[  +2.542348] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.036994] kauditd_printk_skb: 257 callbacks suppressed
	[  +2.330914] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +2.194691] systemd-fstab-generator[1695]: Ignoring "noauto" option for root device
	[  +0.779104] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.727432] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +15.155229] kauditd_printk_skb: 62 callbacks suppressed
	[Jul19 03:37] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 03:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5ff116ccff1] <==
	{"level":"info","ts":"2024-07-19T03:38:02.849603Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:02.851115Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"58de0efec1d86300","to":"dcb4f5dcb4017fbf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T03:38:02.851146Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"58de0efec1d86300","remote-peer-id":"dcb4f5dcb4017fbf"}
	{"level":"info","ts":"2024-07-19T03:38:03.239361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856 15903606512413671359)"}
	{"level":"info","ts":"2024-07-19T03:38:03.239499Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300"}
	{"level":"info","ts":"2024-07-19T03:38:03.239512Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"58de0efec1d86300","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"dcb4f5dcb4017fbf"}
	{"level":"warn","ts":"2024-07-19T03:38:38.860449Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7133861002988229904,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-19T03:38:39.213772Z","caller":"traceutil/trace.go:171","msg":"trace[213955580] linearizableReadLoop","detail":"{readStateIndex:773; appliedIndex:773; }","duration":"854.090297ms","start":"2024-07-19T03:38:38.359661Z","end":"2024-07-19T03:38:39.213752Z","steps":["trace[213955580] 'read index received'  (duration: 854.085672ms)","trace[213955580] 'applied index is now lower than readState.Index'  (duration: 1.458µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:38:39.214653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"854.964275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.5\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-19T03:38:39.214668Z","caller":"traceutil/trace.go:171","msg":"trace[64905690] range","detail":"{range_begin:/registry/masterleases/192.168.105.5; range_end:; response_count:1; response_revision:726; }","duration":"855.016063ms","start":"2024-07-19T03:38:38.359648Z","end":"2024-07-19T03:38:39.214664Z","steps":["trace[64905690] 'agreement among raft nodes before linearized reading'  (duration: 854.846409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.214698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.359622Z","time spent":"855.063476ms","remote":"127.0.0.1:50924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.105.5\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.217551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.784693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.217629Z","caller":"traceutil/trace.go:171","msg":"trace[485073674] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:726; }","duration":"181.858104ms","start":"2024-07-19T03:38:39.035755Z","end":"2024-07-19T03:38:39.217613Z","steps":["trace[485073674] 'agreement among raft nodes before linearized reading'  (duration: 181.775735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.961025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-19T03:38:39.218206Z","caller":"traceutil/trace.go:171","msg":"trace[1437088211] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:726; }","duration":"362.976608ms","start":"2024-07-19T03:38:38.855164Z","end":"2024-07-19T03:38:39.218141Z","steps":["trace[1437088211] 'agreement among raft nodes before linearized reading'  (duration: 362.940194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.855138Z","time spent":"363.085141ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.219731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.350481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.21976Z","caller":"traceutil/trace.go:171","msg":"trace[1532987535] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:726; }","duration":"513.381938ms","start":"2024-07-19T03:38:38.706374Z","end":"2024-07-19T03:38:39.219756Z","steps":["trace[1532987535] 'agreement among raft nodes before linearized reading'  (duration: 509.325689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.219771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.706284Z","time spent":"513.484013ms","remote":"127.0.0.1:50868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-19T03:46:36.540686Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2024-07-19T03:46:36.562489Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1175,"took":"20.474469ms","hash":3930648337,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-19T03:46:36.562693Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3930648337,"revision":1175,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T03:51:36.54679Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1806}
	{"level":"info","ts":"2024-07-19T03:51:36.56014Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1806,"took":"13.081219ms","hash":2540466080,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1347584,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2024-07-19T03:51:36.560169Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2540466080,"revision":1806,"compact-revision":1175}
	
	
	==> kernel <==
	 03:51:38 up 15 min,  0 users,  load average: 0.05, 0.11, 0.09
	Linux ha-256000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74fc7ee22131] <==
	I0719 03:50:29.210276       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:50:39.212130       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:50:39.212151       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:50:39.212286       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:50:39.212297       1 main.go:303] handling current node
	I0719 03:50:49.215283       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:50:49.215305       1 main.go:303] handling current node
	I0719 03:50:49.215314       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:50:49.215317       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:50:59.210139       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:50:59.210159       1 main.go:303] handling current node
	I0719 03:50:59.210171       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:50:59.210174       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:09.209774       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:09.209793       1 main.go:303] handling current node
	I0719 03:51:09.209802       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:09.209805       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:19.211720       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:19.211738       1 main.go:303] handling current node
	I0719 03:51:19.211748       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:19.211751       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:29.209881       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:29.209911       1 main.go:303] handling current node
	I0719 03:51:29.209921       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:29.209924       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [dff4e67b6680] <==
	I0719 03:36:38.302580       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 03:36:38.313862       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 03:36:38.355728       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 03:36:38.357891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0719 03:36:38.358258       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:36:38.359450       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 03:36:39.162576       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 03:36:39.259455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 03:36:39.263308       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 03:36:39.266876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 03:36:53.692820       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 03:36:53.723447       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 03:38:39.230077       1 trace.go:236] Trace[99535700]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.105.5,type:*v1.Endpoints,resource:apiServerIPInfo (19-Jul-2024 03:38:38.359) (total time: 870ms):
	Trace[99535700]: ---"initial value restored" 856ms (03:38:39.216)
	Trace[99535700]: [870.770259ms] [870.770259ms] END
	E0719 03:51:35.729254       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50022: use of closed network connection
	E0719 03:51:35.841233       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50024: use of closed network connection
	E0719 03:51:36.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50029: use of closed network connection
	E0719 03:51:36.142429       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50031: use of closed network connection
	E0719 03:51:36.323525       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50036: use of closed network connection
	E0719 03:51:36.429306       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50038: use of closed network connection
	E0719 03:51:37.668910       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50053: use of closed network connection
	E0719 03:51:37.774366       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50055: use of closed network connection
	E0719 03:51:37.880279       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50057: use of closed network connection
	E0719 03:51:37.986190       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50059: use of closed network connection
	
	
	==> kube-controller-manager [70ffd55232c0] <==
	I0719 03:36:54.412561       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 03:36:54.412576       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 03:37:22.400084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.747µs"
	I0719 03:37:22.402636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.4µs"
	I0719 03:37:22.408319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.037µs"
	I0719 03:37:22.415741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.491µs"
	I0719 03:37:23.262808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.25µs"
	I0719 03:37:23.279353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.239521ms"
	I0719 03:37:23.279510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.085µs"
	I0719 03:37:23.294158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.586299ms"
	I0719 03:37:23.294186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.391µs"
	I0719 03:37:23.772649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 03:38:01.950412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m02\" does not exist"
	I0719 03:38:01.956739       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 03:38:03.779798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m02"
	I0719 03:39:54.715082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.549011ms"
	I0719 03:39:54.728524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.544471ms"
	I0719 03:39:54.760521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.962639ms"
	I0719 03:39:54.798120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.556155ms"
	I0719 03:39:54.810232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.068766ms"
	I0719 03:39:54.810338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.794µs"
	I0719 03:39:56.791240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.855498ms"
	I0719 03:39:56.791390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.29µs"
	I0719 03:39:57.235525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.740732ms"
	I0719 03:39:57.236806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.25502ms"
	
	
	==> kube-proxy [9103cd3e30ac] <==
	I0719 03:36:54.228395       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:36:54.235224       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0719 03:36:54.286000       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:36:54.286028       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:36:54.286039       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:36:54.287034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:36:54.287396       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:36:54.287403       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:36:54.288184       1 config.go:192] "Starting service config controller"
	I0719 03:36:54.288259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:36:54.288280       1 config.go:319] "Starting node config controller"
	I0719 03:36:54.288282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:36:54.289304       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:36:54.289308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:36:54.388688       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:36:54.388711       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:36:54.389972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29f96bba40d3] <==
	W0719 03:36:37.216385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:37.216419       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 03:36:37.216424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 03:36:37.216440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:36:37.216444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 03:36:37.216461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 03:36:37.216464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 03:36:37.216476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:36:37.216491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:36:37.216504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:37.216507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.043369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:36:38.043491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:36:38.078796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.078841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.135286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.135302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.143595       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:36:38.143607       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 03:36:40.612937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 03:39:54.727744       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	E0719 03:39:54.727817       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1bb5b7eb-c669-43f7-ac3f-753596620b94(default/busybox-fc5497c4f-5922h) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-5922h"
	E0719 03:39:54.727832       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" pod="default/busybox-fc5497c4f-5922h"
	I0719 03:39:54.727844       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	
	
	==> kubelet <==
	Jul 19 03:46:39 ha-256000 kubelet[2215]: E0719 03:46:39.080023    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:46:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:46:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:46:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:46:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:47:39 ha-256000 kubelet[2215]: E0719 03:47:39.079617    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:48:39 ha-256000 kubelet[2215]: E0719 03:48:39.080370    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:49:39 ha-256000 kubelet[2215]: E0719 03:49:39.079647    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:50:39 ha-256000 kubelet[2215]: E0719 03:50:39.079658    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
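The repeated "Could not set up iptables canary" errors at the end of the kubelet log indicate that the guest kernel cannot initialize the ip6tables "nat" table (the module is not loaded or not built in); kube-proxy in this log runs single-stack IPv4, so only the kubelet's periodic IPv6 canary check is affected. A minimal way to confirm this from the host (an illustrative check, not part of the test run) would be:

    out/minikube-darwin-arm64 -p ha-256000 ssh "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"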
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-256000 -n ha-256000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-256000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-hkhd4
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-256000 describe pod busybox-fc5497c4f-hkhd4
helpers_test.go:282: (dbg) kubectl --context ha-256000 describe pod busybox-fc5497c4f-hkhd4:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-hkhd4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f6vj6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-f6vj6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  11m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  94s (x2 over 6m34s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  87s (x3 over 11m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.47s)
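The FailedScheduling events in the post-mortem above explain the non-running pod: both Ready nodes already host a busybox replica, and the scheduler reports that neither node satisfies the pod's anti-affinity rule, so busybox-fc5497c4f-hkhd4 stays Pending. The Deployment manifest itself is not shown in this log; assuming the test's Deployment is named busybox (consistent with the ReplicaSet busybox-fc5497c4f above), the constraint could be inspected with:

    kubectl --context ha-256000 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'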

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (51.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-256000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-256000 -v=7 --alsologtostderr: (49.906848917s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr: exit status 2 (211.186333ms)

                                                
                                                
-- stdout --
	ha-256000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-256000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-256000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-256000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 20:52:28.789273    5178 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:52:28.789541    5178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:52:28.789546    5178 out.go:304] Setting ErrFile to fd 2...
	I0718 20:52:28.789549    5178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:52:28.789685    5178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:52:28.789898    5178 out.go:298] Setting JSON to false
	I0718 20:52:28.789911    5178 mustload.go:65] Loading cluster: ha-256000
	I0718 20:52:28.789956    5178 notify.go:220] Checking for updates...
	I0718 20:52:28.790128    5178 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:52:28.790136    5178 status.go:255] checking status of ha-256000 ...
	I0718 20:52:28.790901    5178 status.go:330] ha-256000 host status = "Running" (err=<nil>)
	I0718 20:52:28.790915    5178 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:52:28.791024    5178 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:52:28.791139    5178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:28.791149    5178 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:52:28.820567    5178 ssh_runner.go:195] Run: systemctl --version
	I0718 20:52:28.822895    5178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:28.829106    5178 kubeconfig.go:125] found "ha-256000" server: "https://192.168.105.254:8443"
	I0718 20:52:28.829124    5178 api_server.go:166] Checking apiserver status ...
	I0718 20:52:28.829147    5178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:52:28.834512    5178 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1991/cgroup
	W0718 20:52:28.838005    5178 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1991/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:52:28.838038    5178 ssh_runner.go:195] Run: ls
	I0718 20:52:28.839554    5178 api_server.go:253] Checking apiserver healthz at https://192.168.105.254:8443/healthz ...
	I0718 20:52:28.842976    5178 api_server.go:279] https://192.168.105.254:8443/healthz returned 200:
	ok
	I0718 20:52:28.842986    5178 status.go:422] ha-256000 apiserver status = Running (err=<nil>)
	I0718 20:52:28.842991    5178 status.go:257] ha-256000 status: &{Name:ha-256000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:52:28.843006    5178 status.go:255] checking status of ha-256000-m02 ...
	I0718 20:52:28.844079    5178 status.go:330] ha-256000-m02 host status = "Running" (err=<nil>)
	I0718 20:52:28.844088    5178 host.go:66] Checking if "ha-256000-m02" exists ...
	I0718 20:52:28.844205    5178 host.go:66] Checking if "ha-256000-m02" exists ...
	I0718 20:52:28.844320    5178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:28.844326    5178 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:52:28.872887    5178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:28.879542    5178 kubeconfig.go:125] found "ha-256000" server: "https://192.168.105.254:8443"
	I0718 20:52:28.879553    5178 api_server.go:166] Checking apiserver status ...
	I0718 20:52:28.879582    5178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:52:28.885667    5178 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup
	W0718 20:52:28.889507    5178 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:52:28.889554    5178 ssh_runner.go:195] Run: ls
	I0718 20:52:28.891164    5178 api_server.go:253] Checking apiserver healthz at https://192.168.105.254:8443/healthz ...
	I0718 20:52:28.893814    5178 api_server.go:279] https://192.168.105.254:8443/healthz returned 200:
	ok
	I0718 20:52:28.893823    5178 status.go:422] ha-256000-m02 apiserver status = Running (err=<nil>)
	I0718 20:52:28.893827    5178 status.go:257] ha-256000-m02 status: &{Name:ha-256000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:52:28.893835    5178 status.go:255] checking status of ha-256000-m03 ...
	I0718 20:52:28.894531    5178 status.go:330] ha-256000-m03 host status = "Running" (err=<nil>)
	I0718 20:52:28.894537    5178 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:52:28.894632    5178 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:52:28.894744    5178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:28.894752    5178 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:52:28.918722    5178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:28.924813    5178 kubeconfig.go:125] found "ha-256000" server: "https://192.168.105.254:8443"
	I0718 20:52:28.924824    5178 api_server.go:166] Checking apiserver status ...
	I0718 20:52:28.924846    5178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0718 20:52:28.929860    5178 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:52:28.929869    5178 status.go:422] ha-256000-m03 apiserver status = Stopped (err=<nil>)
	I0718 20:52:28.929874    5178 status.go:257] ha-256000-m03 status: &{Name:ha-256000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:52:28.929881    5178 status.go:255] checking status of ha-256000-m04 ...
	I0718 20:52:28.930767    5178 status.go:330] ha-256000-m04 host status = "Running" (err=<nil>)
	I0718 20:52:28.930774    5178 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:52:28.930865    5178 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:52:28.930971    5178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:28.930976    5178 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m04/id_rsa Username:docker}
	I0718 20:52:28.959675    5178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:28.965904    5178 status.go:257] ha-256000-m04 status: &{Name:ha-256000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr" : exit status 2
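The stderr trace shows how the status check classifies each control plane: it SSHes into the node, runs systemctl is-active for the kubelet and pgrep for a kube-apiserver process, and probes /healthz only when a process is found. For ha-256000-m03 the pgrep call exits with status 1, so the apiserver is reported Stopped even though the shared endpoint https://192.168.105.254:8443/healthz still answers 200 via the other control planes. Repeating that check by hand against the affected node (a sketch using the same commands as the trace; the -n/--node flag of minikube ssh is assumed to be available in this build) would look like:

    out/minikube-darwin-arm64 -p ha-256000 ssh -n ha-256000-m03 "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process found'"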
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.105.1           |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.105.1           |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-256000 -v=7                | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:52 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
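
The tail of this command table is the TestMultiControlPlane/serial/PingHostFromPods sequence: each busybox pod resolves host.minikube.internal, the gateway address is pulled out of line 5 of the nslookup output (the `awk 'NR==5' | cut -d' ' -f3` pipeline shown above), and the pod then pings that address (192.168.105.1 here). The following is a minimal Go sketch of the same field extraction; the function name and the sample nslookup output are illustrative, not minikube test code, and it assumes the fields on that line are separated by single runs of whitespace.

    // hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3` from the table above:
    // it returns the third whitespace-separated field of the fifth line of nslookup output.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func hostIPFromNslookup(out string) (string, bool) {
    	lines := strings.Split(out, "\n")
    	if len(lines) < 5 {
    		return "", false
    	}
    	fields := strings.Fields(lines[4]) // NR==5 is the fifth line (1-based)
    	if len(fields) < 3 {
    		return "", false
    	}
    	return fields[2], true
    }

    func main() {
    	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.105.1 host.minikube.internal\n"
    	if ip, ok := hostIPFromNslookup(sample); ok {
    		fmt.Println(ip) // 192.168.105.1
    	}
    }
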
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:36:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
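
The preamble above documents the klog-style prefix carried by every log line that follows. As a reading aid, here is a minimal Go sketch that splits one such line into its parts, assuming the documented format is exact; the names are made up and this is not minikube or klog source.

    // parseKlogLine splits a line of the form
    //   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    // (the format documented in the preamble above) into its components.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	line := `I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...`
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s line=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
    }
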
	I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:07.154652    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154655    4727 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:07.154657    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154787    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:36:07.155777    4727 out.go:298] Setting JSON to false
	I0718 20:36:07.172062    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2135,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:36:07.172136    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:07.175769    4727 out.go:177] * [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:36:07.182867    4727 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:07.182897    4727 notify.go:220] Checking for updates...
	I0718 20:36:07.188814    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:07.191895    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:36:07.192950    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:07.195871    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:36:07.198897    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:07.202011    4727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:07.205826    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 20:36:07.212869    4727 start.go:297] selected driver: qemu2
	I0718 20:36:07.212875    4727 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:36:07.212880    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:07.215027    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:36:07.217921    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0718 20:36:07.220933    4727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:36:07.220960    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:07.220968    4727 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 20:36:07.220971    4727 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 20:36:07.220995    4727 start.go:340] cluster config:
	{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:07.224405    4727 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:36:07.231878    4727 out.go:177] * Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	I0718 20:36:07.235849    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:07.235880    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:36:07.235892    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:07.235960    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:07.235965    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:07.236167    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:07.236181    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json: {Name:mk4f96c33b167a65b92bd4e48e5f1a3c7a52bbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:07.236387    4727 start.go:360] acquireMachinesLock for ha-256000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:07.236422    4727 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-256000"
	I0718 20:36:07.236432    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:07.236461    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 20:36:07.243901    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:07.268930    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:07.268958    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:07.269026    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:07.269056    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269065    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269104    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:07.269127    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269136    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269466    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:07.395393    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:07.434010    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:07.434014    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:07.434195    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.445169    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.445186    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.445241    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2 +20000M
	I0718 20:36:07.453205    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:07.453220    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.453236    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.453239    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:07.453248    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:07.453278    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e3:ed:16:92:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.491921    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.491947    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.491951    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:07.491963    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:07.492029    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:07.492048    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:07.492054    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:07.492061    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:07.492067    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:09.494175    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:09.494254    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:09.494618    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:09.494729    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:09.494764    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:09.494789    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:09.494817    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:11.496994    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:11.497242    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:11.497663    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:11.497717    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:11.497756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:11.497787    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:11.497819    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:13.500006    4727 main.go:141] libmachine: Attempt 3
	I0718 20:36:13.500080    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:13.500185    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:13.500200    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:13.500205    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:13.500210    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:13.500216    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:15.502208    4727 main.go:141] libmachine: Attempt 4
	I0718 20:36:15.502220    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:15.502255    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:15.502275    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:15.502280    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:15.502285    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:15.502290    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:17.504286    4727 main.go:141] libmachine: Attempt 5
	I0718 20:36:17.504293    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:17.504346    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:17.504356    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:17.504360    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:17.504364    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:17.504369    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:19.506369    4727 main.go:141] libmachine: Attempt 6
	I0718 20:36:19.506395    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:19.506467    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:19.506476    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:19.506481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:19.506485    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:19.506490    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:21.508527    4727 main.go:141] libmachine: Attempt 7
	I0718 20:36:21.508554    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:21.508694    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:21.508708    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:21.508719    4727 main.go:141] libmachine: Found match: 6a:e3:ed:16:92:d5
	I0718 20:36:21.508730    4727 main.go:141] libmachine: IP: 192.168.105.5
	I0718 20:36:21.508735    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
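
Attempts 0 through 7 above show libmachine polling /var/db/dhcpd_leases roughly every two seconds until the VM's freshly generated MAC (6a:e3:ed:16:92:d5) appears with a lease, at which point the IP is taken from that entry. Below is a hedged Go sketch of that wait loop; the function name is made up, and for illustration it only does a substring search for the MAC rather than parsing the individual lease entries the way the log above shows.

    // waitForLease polls a dhcpd leases file until the given MAC address appears,
    // mirroring the "Attempt N ... Searching for <mac>" loop in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    func waitForLease(path, mac string, attempts int, interval time.Duration) (bool, error) {
    	for i := 0; i < attempts; i++ {
    		data, err := os.ReadFile(path)
    		if err != nil && !os.IsNotExist(err) {
    			return false, err
    		}
    		if strings.Contains(strings.ToLower(string(data)), strings.ToLower(mac)) {
    			return true, nil
    		}
    		time.Sleep(interval)
    	}
    	return false, nil
    }

    func main() {
    	found, err := waitForLease("/var/db/dhcpd_leases", "6a:e3:ed:16:92:d5", 30, 2*time.Second)
    	fmt.Println(found, err)
    }
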
	I0718 20:36:22.527247    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:36:22.527480    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.527975    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.527990    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:36:22.610697    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:36:22.610726    4727 buildroot.go:166] provisioning hostname "ha-256000"
	I0718 20:36:22.610824    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.611097    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.611107    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000 && echo "ha-256000" | sudo tee /etc/hostname
	I0718 20:36:22.682492    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000
	
	I0718 20:36:22.682552    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.682702    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.682713    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:36:22.742479    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:36:22.742492    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:36:22.742500    4727 buildroot.go:174] setting up certificates
	I0718 20:36:22.742504    4727 provision.go:84] configureAuth start
	I0718 20:36:22.742508    4727 provision.go:143] copyHostCerts
	I0718 20:36:22.742542    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742586    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:36:22.742593    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742831    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:36:22.743010    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743030    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:36:22.743033    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743097    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:36:22.743184    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743212    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:36:22.743215    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743275    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:36:22.743373    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000 san=[127.0.0.1 192.168.105.5 ha-256000 localhost minikube]
	I0718 20:36:22.831924    4727 provision.go:177] copyRemoteCerts
	I0718 20:36:22.831953    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:36:22.831960    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:22.861471    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:36:22.861517    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:36:22.869576    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:36:22.869616    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0718 20:36:22.877642    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:36:22.877682    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 20:36:22.885597    4727 provision.go:87] duration metric: took 143.091583ms to configureAuth
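
The configureAuth step above copies the host CA material and generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile name, localhost and minikube. Below is a self-contained Go sketch of issuing such a SAN-bearing server certificate from an existing CA with crypto/x509; the file names are placeholders, error handling is trimmed, it assumes an RSA PKCS#1 CA key, and it is not minikube's provisioning code.

    // Sketch: sign a server certificate with IP and DNS SANs using an existing CA,
    // roughly what the "generating server cert ... san=[...]" step above does.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caCertPEM, _ := os.ReadFile("ca.pem")     // placeholder paths
    	caKeyPEM, _ := os.ReadFile("ca-key.pem")
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

    	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-256000"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-256000", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.5")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }
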
	I0718 20:36:22.885605    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:36:22.885700    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:22.885731    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.885814    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.885819    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:36:22.939257    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:36:22.939268    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:36:22.939327    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:36:22.939382    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.939495    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.939529    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:36:22.999120    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:36:22.999176    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.999299    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.999307    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:36:24.399001    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:36:24.399014    4727 machine.go:97] duration metric: took 1.871786709s to provisionDockerMachine
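
The unit install above follows an "only restart if it actually changed" pattern: docker.service.new is written out, diffed against the installed unit, and only on a difference is it moved into place before daemon-reload/enable/restart. A small Go sketch of the same idea follows; the helper name is made up and this is not how minikube itself performs the step (it runs the shell command shown above over SSH).

    // replaceIfChanged writes newContent to path only when it differs from the
    // current file contents, and reports whether a change was made -- the same
    // idea as `sudo diff -u ... || { mv ...; systemctl restart ...; }` in the log.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func replaceIfChanged(path string, newContent []byte, mode os.FileMode) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return false, nil // unchanged: the caller can skip the service restart
    	}
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, newContent, mode); err != nil {
    		return false, err
    	}
    	return true, os.Rename(tmp, path)
    }

    func main() {
    	changed, err := replaceIfChanged("docker.service", []byte("[Unit]\nDescription=Docker Application Container Engine\n"), 0644)
    	fmt.Println(changed, err)
    }
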
	I0718 20:36:24.399020    4727 client.go:171] duration metric: took 17.130530167s to LocalClient.Create
	I0718 20:36:24.399035    4727 start.go:167] duration metric: took 17.130580916s to libmachine.API.Create "ha-256000"
	I0718 20:36:24.399041    4727 start.go:293] postStartSetup for "ha-256000" (driver="qemu2")
	I0718 20:36:24.399047    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:36:24.399133    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:36:24.399144    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.429882    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:36:24.431446    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:36:24.431458    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:36:24.431559    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:36:24.431674    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:36:24.431679    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:36:24.431800    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:36:24.434949    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:24.443099    4727 start.go:296] duration metric: took 44.054208ms for postStartSetup
	I0718 20:36:24.443547    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:24.443727    4727 start.go:128] duration metric: took 17.207737166s to createHost
	I0718 20:36:24.443753    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:24.443841    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:24.443845    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:36:24.496185    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360184.183489336
	
	I0718 20:36:24.496191    4727 fix.go:216] guest clock: 1721360184.183489336
	I0718 20:36:24.496195    4727 fix.go:229] Guest: 2024-07-18 20:36:24.183489336 -0700 PDT Remote: 2024-07-18 20:36:24.44373 -0700 PDT m=+17.308254043 (delta=-260.240664ms)
	I0718 20:36:24.496206    4727 fix.go:200] guest clock delta is within tolerance: -260.240664ms
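
The guest clock check above runs `date +%s.%N` in the VM (1721360184.183489336), converts it to a timestamp, and compares it against the host clock; the -260ms delta is inside the tolerance, so the start proceeds without adjusting the clock. A minimal Go sketch of that comparison follows; the one-second tolerance is an assumption for illustration, since the log only shows that a ~-260ms delta is accepted.

    // clockDelta parses `date +%s.%N` output from the guest and returns the
    // difference from the supplied host time.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func clockDelta(guest string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	delta, err := clockDelta("1721360184.183489336", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	withinTolerance := delta > -time.Second && delta < time.Second // tolerance assumed for illustration
    	fmt.Println(delta, withinTolerance)
    }
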
	I0718 20:36:24.496210    4727 start.go:83] releasing machines lock for "ha-256000", held for 17.260259709s
	I0718 20:36:24.496487    4727 ssh_runner.go:195] Run: cat /version.json
	I0718 20:36:24.496496    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.498161    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:36:24.498180    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.526501    4727 ssh_runner.go:195] Run: systemctl --version
	I0718 20:36:24.575612    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 20:36:24.577665    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:36:24.577696    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:36:24.584047    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:36:24.584056    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.584135    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.590860    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:36:24.594365    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:36:24.597804    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.597834    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:36:24.601501    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.605402    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:36:24.609279    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.613150    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:36:24.616783    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:36:24.620826    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:36:24.624868    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:36:24.628746    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:36:24.632406    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:36:24.635998    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:24.719937    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:36:24.727107    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.727172    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:36:24.734556    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.745145    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:36:24.752682    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.758405    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.763722    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:36:24.804424    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.810784    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.817505    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:36:24.818968    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:36:24.822004    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:36:24.827814    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:36:24.912234    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:36:24.993893    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.993951    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:36:25.000295    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:25.079893    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:27.267877    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188026583s)
	I0718 20:36:27.267954    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:36:27.273388    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:36:27.280952    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.286424    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:36:27.376871    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:36:27.462186    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.546490    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:36:27.553023    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.558470    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.643444    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:36:27.668876    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:36:27.669018    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:36:27.671231    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:36:27.671271    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:36:27.672746    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:36:27.689183    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:36:27.689243    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.699313    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.710299    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:36:27.710436    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:36:27.711936    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:27.716497    4727 kubeadm.go:883] updating cluster {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 20:36:27.716547    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:27.716590    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:27.721193    4727 docker.go:685] Got preloaded images: 
	I0718 20:36:27.721201    4727 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0718 20:36:27.721249    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:27.725068    4727 ssh_runner.go:195] Run: which lz4
	I0718 20:36:27.726303    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0718 20:36:27.726385    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0718 20:36:27.727841    4727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 20:36:27.727857    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335411903 bytes)
	I0718 20:36:29.032881    4727 docker.go:649] duration metric: took 1.306555792s to copy over tarball
	I0718 20:36:29.032945    4727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 20:36:30.077797    4727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.044866416s)
	I0718 20:36:30.077812    4727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 20:36:30.092929    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:30.096929    4727 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0718 20:36:30.102897    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:30.190133    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:32.408215    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218126791s)
	I0718 20:36:32.408325    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:32.414564    4727 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 20:36:32.414576    4727 cache_images.go:84] Images are preloaded, skipping loading
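
The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are the before/after check for the preload: the first run comes back empty, so registry.k8s.io/kube-apiserver:v1.30.3 "wasn't preloaded" and the tarball is copied over and unpacked; the second run lists all expected images, so cache loading is skipped. Below is a small Go sketch of that presence check; the function name is made up and this is not minikube's cache_images code.

    // missingImages reports which required images are absent from the output of
    // `docker images --format {{.Repository}}:{{.Tag}}`, mirroring the
    // "wasn't preloaded" / "Images are preloaded" decision in the log above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func missingImages(dockerImagesOutput string, required []string) []string {
    	have := map[string]bool{}
    	for _, line := range strings.Split(dockerImagesOutput, "\n") {
    		if img := strings.TrimSpace(line); img != "" {
    			have[img] = true
    		}
    	}
    	var missing []string
    	for _, img := range required {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing
    }

    func main() {
    	out := "registry.k8s.io/kube-apiserver:v1.30.3\nregistry.k8s.io/etcd:3.5.12-0\n"
    	req := []string{"registry.k8s.io/kube-apiserver:v1.30.3", "registry.k8s.io/pause:3.9"}
    	fmt.Println(missingImages(out, req)) // [registry.k8s.io/pause:3.9]
    }
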
	I0718 20:36:32.414588    4727 kubeadm.go:934] updating node { 192.168.105.5 8443 v1.30.3 docker true true} ...
	I0718 20:36:32.414662    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:36:32.414717    4727 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 20:36:32.422967    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:32.422975    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:32.422989    4727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 20:36:32.423001    4727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256000 NodeName:ha-256000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 20:36:32.423064    4727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-256000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
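
Editor's note: a config block like the one above is typically produced by filling a template with the per-node values (advertise address, node name, CRI socket, pod/service subnets). The text/template sketch below is a hypothetical illustration of that idea covering only a few fields; it is not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical subset of the values substituted into the config.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	CRISocket         string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.105.5",
		BindPort:          8443,
		NodeName:          "ha-256000",
		CRISocket:         "unix:///var/run/cri-dockerd.sock",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.3",
	}
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
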
	
	I0718 20:36:32.423074    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:36:32.423127    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:36:32.430238    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:36:32.430293    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
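
Editor's note: the manifest above gains its lb_enable/lb_port entries only because the earlier modprobe of the IPVS modules succeeded ("auto-enabling control-plane load-balancing in kube-vip"). The Go sketch below illustrates that conditional, using an assumed env list and shelling out to the same modprobe command; it is a sketch of the decision, not the generator itself.

package main

import (
	"fmt"
	"os/exec"
)

// env mirrors the name/value pairs that end up in the kube-vip pod spec.
type env struct{ Name, Value string }

func main() {
	vip := "192.168.105.254"
	envs := []env{
		{"vip_arp", "true"},
		{"port", "8443"},
		{"cp_enable", "true"},
		{"address", vip},
	}

	// Only turn on kube-vip's control-plane load-balancing when the IPVS
	// modules can be loaded, mirroring the modprobe check in the log above.
	if err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run(); err == nil {
		envs = append(envs, env{"lb_enable", "true"}, env{"lb_port", "8443"})
	}

	for _, e := range envs {
		fmt.Printf("    - name: %s\n      value: %q\n", e.Name, e.Value)
	}
}
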
	I0718 20:36:32.430329    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:36:32.433734    4727 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 20:36:32.433764    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0718 20:36:32.437628    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0718 20:36:32.443760    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:36:32.449483    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0718 20:36:32.455815    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0718 20:36:32.461759    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:36:32.463168    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:32.467182    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:32.556522    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:36:32.567007    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.5
	I0718 20:36:32.567019    4727 certs.go:194] generating shared ca certs ...
	I0718 20:36:32.567029    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.567195    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:36:32.567242    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:36:32.567249    4727 certs.go:256] generating profile certs ...
	I0718 20:36:32.567287    4727 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:36:32.567299    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt with IP's: []
	I0718 20:36:32.629331    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt ...
	I0718 20:36:32.629341    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt: {Name:mkc9c3e562115edef8b85e012e81a3eb4a2cf75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key ...
	I0718 20:36:32.629649    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key: {Name:mkb41caa35d055a2dcb04d364862addacfff33bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629781    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4
	I0718 20:36:32.629789    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.254]
	I0718 20:36:32.695617    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 ...
	I0718 20:36:32.695626    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4: {Name:mkee89910ca1db08ac083863b0e4a027ae270203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696056    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 ...
	I0718 20:36:32.696061    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4: {Name:mk8365902b4e9f071c9404629a4b35cc6ca6ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696198    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:36:32.696306    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:36:32.696557    4727 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:36:32.696565    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt with IP's: []
	I0718 20:36:32.762976    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt ...
	I0718 20:36:32.762980    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt: {Name:mkb3e0281e7ef362624ad24bb17cfb244b9bc171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763112    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key ...
	I0718 20:36:32.763115    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key: {Name:mkc06a04ddb3616913d2c6f5647bad25fef6f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763224    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:36:32.763237    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:36:32.763247    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:36:32.763257    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:36:32.763268    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:36:32.763279    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:36:32.763290    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:36:32.763301    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:36:32.763382    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:36:32.763410    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:36:32.763415    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:36:32.763434    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:36:32.763451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:36:32.763468    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:36:32.763505    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:32.763524    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.763535    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.763546    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.763807    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:36:32.773281    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:36:32.781447    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:36:32.789770    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:36:32.798040    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 20:36:32.806232    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:36:32.814458    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:36:32.822522    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:36:32.830515    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:36:32.838566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:36:32.846581    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:36:32.854568    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 20:36:32.860769    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:36:32.863035    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:36:32.867352    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868859    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868879    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.870984    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:36:32.874504    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:36:32.878096    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879659    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879678    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.881640    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:36:32.885559    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:36:32.889461    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891114    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891133    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.893171    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
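
Editor's note: the ls/openssl/ln sequence above installs each CA into the guest's trust store: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it so OpenSSL can find the CA by hash lookup (e.g. b5213941.0 for minikubeCA.pem). A rough Go sketch of that step follows, shelling out to openssl; paths and error handling are illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL subject
// hash, the same <hash>.0 naming the log shows above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs semantics: replace any stale link pointing elsewhere.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
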
	I0718 20:36:32.897112    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:36:32.898621    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:36:32.898660    4727 kubeadm.go:392] StartCluster: {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clus
terName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:32.898726    4727 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 20:36:32.903849    4727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 20:36:32.907545    4727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 20:36:32.910740    4727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 20:36:32.914021    4727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 20:36:32.914030    4727 kubeadm.go:157] found existing configuration files:
	
	I0718 20:36:32.914050    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 20:36:32.917254    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 20:36:32.917277    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 20:36:32.920874    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 20:36:32.924549    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 20:36:32.924574    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 20:36:32.928189    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.931542    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 20:36:32.931572    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.934804    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 20:36:32.937825    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 20:36:32.937847    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 20:36:32.941208    4727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 20:36:32.964473    4727 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0718 20:36:32.964502    4727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 20:36:33.010272    4727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 20:36:33.010346    4727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 20:36:33.010394    4727 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0718 20:36:33.080896    4727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 20:36:33.088116    4727 out.go:204]   - Generating certificates and keys ...
	I0718 20:36:33.088149    4727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 20:36:33.088180    4727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 20:36:33.187618    4727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 20:36:33.225765    4727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0718 20:36:33.439485    4727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0718 20:36:33.599214    4727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0718 20:36:33.681357    4727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0718 20:36:33.681418    4727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.726840    4727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0718 20:36:33.726901    4727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.875169    4727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 20:36:34.071575    4727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 20:36:34.163748    4727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0718 20:36:34.163778    4727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 20:36:34.260583    4727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 20:36:34.352375    4727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 20:36:34.395125    4727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 20:36:34.512349    4727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 20:36:34.655223    4727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 20:36:34.655381    4727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 20:36:34.656483    4727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 20:36:34.666848    4727 out.go:204]   - Booting up control plane ...
	I0718 20:36:34.666901    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 20:36:34.666950    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 20:36:34.666982    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 20:36:34.667031    4727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 20:36:34.667081    4727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 20:36:34.667103    4727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 20:36:34.759306    4727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0718 20:36:34.759350    4727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0718 20:36:35.263383    4727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.7975ms
	I0718 20:36:35.263624    4727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0718 20:36:38.766721    4727 kubeadm.go:310] [api-check] The API server is healthy after 3.504642043s
	I0718 20:36:38.772139    4727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 20:36:38.775784    4727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 20:36:38.782114    4727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 20:36:38.782191    4727 kubeadm.go:310] [mark-control-plane] Marking the node ha-256000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 20:36:38.784595    4727 kubeadm.go:310] [bootstrap-token] Using token: yv8fsh.sh51yi31jewcw15j
	I0718 20:36:38.788784    4727 out.go:204]   - Configuring RBAC rules ...
	I0718 20:36:38.788835    4727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 20:36:38.790051    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 20:36:38.796261    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 20:36:38.797188    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 20:36:38.797986    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 20:36:38.798957    4727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 20:36:39.169725    4727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 20:36:39.576005    4727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 20:36:40.169284    4727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 20:36:40.169608    4727 kubeadm.go:310] 
	I0718 20:36:40.169641    4727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 20:36:40.169646    4727 kubeadm.go:310] 
	I0718 20:36:40.169692    4727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 20:36:40.169695    4727 kubeadm.go:310] 
	I0718 20:36:40.169709    4727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 20:36:40.169760    4727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 20:36:40.169794    4727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 20:36:40.169797    4727 kubeadm.go:310] 
	I0718 20:36:40.169826    4727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 20:36:40.169830    4727 kubeadm.go:310] 
	I0718 20:36:40.169856    4727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 20:36:40.169858    4727 kubeadm.go:310] 
	I0718 20:36:40.169883    4727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 20:36:40.169938    4727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 20:36:40.169984    4727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 20:36:40.169987    4727 kubeadm.go:310] 
	I0718 20:36:40.170044    4727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 20:36:40.170090    4727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 20:36:40.170093    4727 kubeadm.go:310] 
	I0718 20:36:40.170134    4727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170222    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 20:36:40.170234    4727 kubeadm.go:310] 	--control-plane 
	I0718 20:36:40.170242    4727 kubeadm.go:310] 
	I0718 20:36:40.170285    4727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 20:36:40.170299    4727 kubeadm.go:310] 
	I0718 20:36:40.170351    4727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170426    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 20:36:40.170492    4727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 20:36:40.170502    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:40.170507    4727 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0718 20:36:40.176555    4727 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 20:36:40.183616    4727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 20:36:40.185686    4727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 20:36:40.185696    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 20:36:40.191764    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 20:36:40.332259    4727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 20:36:40.332307    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.332337    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000 minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=true
	I0718 20:36:40.385331    4727 ops.go:34] apiserver oom_adj: -16
	I0718 20:36:40.385383    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.887435    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.387480    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.887395    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.387370    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.885756    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.387374    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.886101    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.386656    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.887355    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.387330    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.887331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.386668    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.886398    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.385335    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.887237    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.387224    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.887271    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.387175    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.885647    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.387168    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.887214    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.387158    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.887129    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.387127    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.887088    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.387119    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.885301    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.387061    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.453749    4727 kubeadm.go:1113] duration metric: took 14.12187225s to wait for elevateKubeSystemPrivileges
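
Editor's note: the run of identical `kubectl get sa default` commands above is a poll. kubeadm init has already returned, but the "default" ServiceAccount only appears once the controller-manager catches up, so the start path retries on a roughly 500ms cadence until the command exits 0. A hedged Go sketch of such a wait loop follows; the kubectl and kubeconfig paths are taken from the log, while the loop itself is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// timeout elapses, mirroring the ~500ms cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is available")
}
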
	I0718 20:36:54.453766    4727 kubeadm.go:394] duration metric: took 21.55570275s to StartCluster
	I0718 20:36:54.453776    4727 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.453868    4727 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.454239    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.454483    4727 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.454492    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:36:54.454494    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 20:36:54.454496    4727 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 20:36:54.454530    4727 addons.go:69] Setting storage-provisioner=true in profile "ha-256000"
	I0718 20:36:54.454533    4727 addons.go:69] Setting default-storageclass=true in profile "ha-256000"
	I0718 20:36:54.454543    4727 addons.go:234] Setting addon storage-provisioner=true in "ha-256000"
	I0718 20:36:54.454546    4727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-256000"
	I0718 20:36:54.454554    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.454722    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.455342    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.455486    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 20:36:54.455762    4727 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 20:36:54.455811    4727 addons.go:234] Setting addon default-storageclass=true in "ha-256000"
	I0718 20:36:54.455823    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.460675    4727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 20:36:54.464747    4727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.464758    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 20:36:54.464769    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.465436    4727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.465440    4727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 20:36:54.465444    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.511774    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.706626    4727 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
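
Editor's note: the command at 20:36:54.511 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.105.1) from inside the cluster, by inserting a hosts{ ... fallthrough } block ahead of the forward directive. The actual step does this with sed against the live ConfigMap; the Go sketch below shows the same edit as plain string manipulation on an assumed sample Corefile.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block before the forward directive so
// host.minikube.internal resolves to the host's gateway IP in-cluster.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Assumed, abbreviated Corefile for demonstration.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.105.1"))
}
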
	I0718 20:36:54.777305    4727 round_trippers.go:463] GET https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 20:36:54.777314    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.777318    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.777321    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.782732    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:36:54.783013    4727 round_trippers.go:463] PUT https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0718 20:36:54.783019    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.783023    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.783026    4727 round_trippers.go:473]     Content-Type: application/json
	I0718 20:36:54.783028    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.784014    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:36:54.792272    4727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 20:36:54.793579    4727 addons.go:510] duration metric: took 339.092083ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0718 20:36:54.793593    4727 start.go:246] waiting for cluster config update ...
	I0718 20:36:54.793600    4727 start.go:255] writing updated cluster config ...
	I0718 20:36:54.798143    4727 out.go:177] 
	I0718 20:36:54.802340    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.802369    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.805206    4727 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:36:54.813295    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:54.813304    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:54.813383    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:54.813389    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:54.813425    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.813828    4727 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:54.813863    4727 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:36:54.813872    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:tr
ue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.813899    4727 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0718 20:36:54.818236    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:54.833731    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:54.833754    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:54.833854    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:54.833891    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833898    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.833936    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:54.833959    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833965    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.834273    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:54.991167    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:55.074302    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:55.074313    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:55.074505    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.084177    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.084198    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.084247    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2 +20000M
	I0718 20:36:55.092640    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:55.092655    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.092668    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.092672    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:55.092685    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:55.092723    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.131373    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.131397    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.131401    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:55.131414    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:55.131476    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:55.131491    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:55.131496    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:55.131509    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:55.131515    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:55.131521    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:57.132241    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:57.132260    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:57.132370    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:57.132380    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:57.132387    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:57.132391    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:57.132399    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:57.132403    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:59.134429    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:59.134514    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:59.134610    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:59.134633    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:59.134640    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:59.134645    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:59.134650    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:59.134655    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:01.136704    4727 main.go:141] libmachine: Attempt 3
	I0718 20:37:01.136730    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:01.136864    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:01.136874    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:01.136879    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:01.136892    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:01.136897    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:01.136902    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:03.139087    4727 main.go:141] libmachine: Attempt 4
	I0718 20:37:03.139131    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:03.139262    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:03.139278    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:03.139286    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:03.139290    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:03.139295    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:03.139305    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:05.141342    4727 main.go:141] libmachine: Attempt 5
	I0718 20:37:05.141371    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:05.141487    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:05.141499    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:05.141504    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:05.141508    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:05.141513    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:05.141518    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:07.141729    4727 main.go:141] libmachine: Attempt 6
	I0718 20:37:07.141760    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:07.141844    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:07.141853    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:07.141858    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:07.141862    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:07.141866    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:07.141871    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:09.143893    4727 main.go:141] libmachine: Attempt 7
	I0718 20:37:09.143910    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:09.143997    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:37:09.144009    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:37:09.144011    4727 main.go:141] libmachine: Found match: 5a:e8:7:38:73:30
	I0718 20:37:09.144020    4727 main.go:141] libmachine: IP: 192.168.105.6
	I0718 20:37:09.144023    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
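The retry loop above is simply polling the macOS DHCP lease database until the new VM's MAC address shows up; the lease file records each octet without leading zeros, which is why the search string is 5a:e8:7:38:73:30 rather than the 5a:e8:07:38:73:30 passed to QEMU. A rough manual check of the same thing (field layout of the lease entry assumed):

    grep -B 2 -A 2 '5a:e8:7:38:73:30' /var/db/dhcpd_leases   # the ip_address= field of the matching entry is the VM's IP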
	I0718 20:37:22.173394    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:37:22.173460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.173824    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.173832    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:37:22.224366    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:37:22.224379    4727 buildroot.go:166] provisioning hostname "ha-256000-m02"
	I0718 20:37:22.224437    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.224569    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.224574    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m02 && echo "ha-256000-m02" | sudo tee /etc/hostname
	I0718 20:37:22.281136    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m02
	
	I0718 20:37:22.281193    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.281326    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.281333    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:37:22.335405    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:37:22.335420    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:37:22.335427    4727 buildroot.go:174] setting up certificates
	I0718 20:37:22.335432    4727 provision.go:84] configureAuth start
	I0718 20:37:22.335436    4727 provision.go:143] copyHostCerts
	I0718 20:37:22.335460    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335499    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:37:22.335504    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335625    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:37:22.335755    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335793    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:37:22.335798    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335849    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:37:22.335937    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.335958    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:37:22.335961    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.336009    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:37:22.336098    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m02 san=[127.0.0.1 192.168.105.6 ha-256000-m02 localhost minikube]
	I0718 20:37:22.416839    4727 provision.go:177] copyRemoteCerts
	I0718 20:37:22.417292    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:37:22.417307    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:22.446250    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:37:22.446323    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:37:22.455193    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:37:22.455243    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:37:22.463182    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:37:22.463217    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:37:22.471841    4727 provision.go:87] duration metric: took 136.406375ms to configureAuth
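configureAuth above generated a server certificate whose SANs cover 127.0.0.1, 192.168.105.6, ha-256000-m02, localhost and minikube, and copied it to /etc/docker on the guest. If needed, the SANs can be inspected on the guest with a standard openssl query (a verification sketch, not part of this run):

    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'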
	I0718 20:37:22.471860    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:37:22.472154    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:22.472192    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.472306    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.472312    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:37:22.520570    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:37:22.520580    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:37:22.520661    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:37:22.520720    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.520835    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.520884    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:37:22.573905    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:37:22.573954    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.574074    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.574082    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:37:23.946918    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:37:23.946932    4727 machine.go:97] duration metric: took 1.773574458s to provisionDockerMachine
	I0718 20:37:23.946948    4727 client.go:171] duration metric: took 29.113993584s to LocalClient.Create
	I0718 20:37:23.946964    4727 start.go:167] duration metric: took 29.114041166s to libmachine.API.Create "ha-256000"
	I0718 20:37:23.946968    4727 start.go:293] postStartSetup for "ha-256000-m02" (driver="qemu2")
	I0718 20:37:23.946975    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:37:23.947049    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:37:23.947059    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:23.975789    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:37:23.977316    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:37:23.977325    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:37:23.977414    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:37:23.977531    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:37:23.977538    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:37:23.977667    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:37:23.981129    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:23.989836    4727 start.go:296] duration metric: took 42.86225ms for postStartSetup
	I0718 20:37:23.990279    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:37:23.990466    4727 start.go:128] duration metric: took 29.177367125s to createHost
	I0718 20:37:23.990492    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:23.990582    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:23.990587    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:37:24.039991    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360244.056265969
	
	I0718 20:37:24.040003    4727 fix.go:216] guest clock: 1721360244.056265969
	I0718 20:37:24.040011    4727 fix.go:229] Guest: 2024-07-18 20:37:24.056265969 -0700 PDT Remote: 2024-07-18 20:37:23.990469 -0700 PDT m=+76.856635126 (delta=65.796969ms)
	I0718 20:37:24.040021    4727 fix.go:200] guest clock delta is within tolerance: 65.796969ms
	I0718 20:37:24.040027    4727 start.go:83] releasing machines lock for "ha-256000-m02", held for 29.226966s
	I0718 20:37:24.045188    4727 out.go:177] * Found network options:
	I0718 20:37:24.048256    4727 out.go:177]   - NO_PROXY=192.168.105.5
	W0718 20:37:24.052331    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:37:24.052639    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:37:24.052695    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:37:24.052702    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:24.052696    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:37:24.052803    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	W0718 20:37:24.080701    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:37:24.080760    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:37:24.120864    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:37:24.120877    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.120944    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.128913    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:37:24.133095    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:37:24.137320    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.137368    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:37:24.141513    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.145685    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:37:24.149674    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.153524    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:37:24.157504    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:37:24.161442    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:37:24.165217    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:37:24.169715    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:37:24.173504    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:37:24.177428    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.249585    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:37:24.258814    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.258889    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:37:24.266134    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.272789    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:37:24.282701    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.287831    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.293394    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:37:24.332150    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.338444    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.344970    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:37:24.346508    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:37:24.349662    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:37:24.355683    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:37:24.439008    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:37:24.522884    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.522913    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:37:24.529269    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.614408    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:37:26.705797    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.091426708s)
	I0718 20:37:26.705868    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:37:26.711797    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:37:26.719055    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.724747    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:37:26.813533    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:37:26.893596    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:26.965581    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:37:26.972962    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.978785    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:27.061213    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
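The sequence above stops the standalone containerd and crio services, points crictl at the cri-dockerd socket, and restarts docker together with the cri-docker socket and service, i.e. Docker plus the cri-dockerd shim ends up as the selected runtime. A quick way to confirm that end state on the guest (expected result under the assumption nothing restarted the stopped units): docker and cri-docker active, containerd and crio inactive.

    sudo systemctl is-active docker cri-docker.service containerd crio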
	I0718 20:37:27.087585    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:37:27.087659    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:37:27.091046    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:37:27.091097    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:37:27.092542    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:37:27.112215    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:37:27.112278    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.124950    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.136592    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:37:27.145555    4727 out.go:177]   - env NO_PROXY=192.168.105.5
	I0718 20:37:27.149713    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:37:27.151201    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:27.155414    4727 mustload.go:65] Loading cluster: ha-256000
	I0718 20:37:27.155551    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:27.156066    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:27.156157    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.6
	I0718 20:37:27.156161    4727 certs.go:194] generating shared ca certs ...
	I0718 20:37:27.156167    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.156269    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:37:27.156316    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:37:27.156321    4727 certs.go:256] generating profile certs ...
	I0718 20:37:27.156387    4727 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:37:27.156400    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9
	I0718 20:37:27.156410    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 192.168.105.254]
	I0718 20:37:27.328161    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 ...
	I0718 20:37:27.328188    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9: {Name:mkff536dfdabd0cc9a693525dd142a97006d4485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 ...
	I0718 20:37:27.328655    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9: {Name:mkb963d77aed955311589ae3cd9371dca3b50bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328816    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:37:27.328945    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:37:27.329100    4727 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:37:27.329110    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:37:27.329125    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:37:27.329137    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:37:27.329150    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:37:27.329162    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:37:27.329176    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:37:27.329186    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:37:27.329197    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:37:27.329271    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:37:27.329299    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:37:27.329305    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:37:27.329347    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:37:27.329372    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:37:27.329396    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:37:27.329451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:27.329478    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.329491    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.329501    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.329519    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:27.355925    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0718 20:37:27.357647    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0718 20:37:27.362088    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0718 20:37:27.363733    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0718 20:37:27.367759    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0718 20:37:27.369261    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0718 20:37:27.373839    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0718 20:37:27.375475    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0718 20:37:27.379174    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0718 20:37:27.380628    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0718 20:37:27.384809    4727 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0718 20:37:27.386562    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0718 20:37:27.390606    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:37:27.399865    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:37:27.408308    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:37:27.416747    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:37:27.425050    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0718 20:37:27.433244    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:37:27.441306    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:37:27.449446    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:37:27.457566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:37:27.465676    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:37:27.473743    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:37:27.482174    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0718 20:37:27.487947    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0718 20:37:27.493902    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0718 20:37:27.499712    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0718 20:37:27.505265    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0718 20:37:27.511047    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0718 20:37:27.517340    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0718 20:37:27.523229    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:37:27.525438    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:37:27.529080    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530597    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530617    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.532775    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:37:27.536483    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:37:27.540031    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541631    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541649    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.543631    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:37:27.547571    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:37:27.551419    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553057    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553079    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.555162    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
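The openssl/ln pairs above install each CA into the system trust directory under its OpenSSL subject-hash name; b5213941.0, for example, is the subject hash of minikubeCA.pem. The pattern, with the certificate path as a placeholder:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"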
	I0718 20:37:27.559227    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:37:27.560725    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:37:27.560754    4727 kubeadm.go:934] updating node {m02 192.168.105.6 8443 v1.30.3 docker true true} ...
	I0718 20:37:27.560799    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:37:27.560814    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:37:27.560837    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:37:27.572539    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:37:27.572577    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
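The static-pod manifest above runs kube-vip with control-plane load balancing enabled (cp_enable/lb_enable), so 192.168.105.254 floats between control-plane nodes as the API-server VIP; the apiserver certificate generated earlier already lists that address in its SANs. A rough check that the VIP answers once the pod is up, assuming a kubeconfig for this cluster:

    kubectl --server=https://192.168.105.254:8443 get nodes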
	I0718 20:37:27.572623    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.576082    4727 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0718 20:37:27.576121    4727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm
	I0718 20:37:27.579785    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet
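The three downloads above fetch the v1.30.3 linux/arm64 binaries into the local cache, verifying each against its published .sha256 file. A by-hand equivalent for one of them, assuming the .sha256 file contains only the bare hex digest:

    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet
    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | shasum -a 256 -c -   # prints "kubelet: OK" on a match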
	I0718 20:37:34.561853    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.561928    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.564073    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0718 20:37:34.564095    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (49938584 bytes)
	I0718 20:37:35.510887    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.510952    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.512864    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0718 20:37:35.512884    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (48955544 bytes)
	I0718 20:37:42.606961    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:37:42.613080    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.613168    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.614817    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0718 20:37:42.614833    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (96467384 bytes)
	I0718 20:37:43.119287    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0718 20:37:43.122637    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0718 20:37:43.128732    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:37:43.134516    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0718 20:37:43.141275    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:37:43.142606    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:43.146857    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:43.230113    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:37:43.243145    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:43.243333    4727 start.go:317] joinCluster: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:37:43.243382    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 20:37:43.243391    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:43.371073    4727 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:37:43.371092    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0718 20:38:03.232381    4727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (19.861822375s)
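The ~20s join above is the standard kubeadm control-plane join: minikube first mints a bootstrap token on the existing control plane (the kubeadm token create --print-join-command run a few lines earlier), then runs kubeadm join on m02 with --control-plane so the node brings up its own etcd member and control-plane static pods. A rough shell sketch of the same sequence, with placeholder token and hash values:

    # On the existing control-plane node: print a reusable join command.
    sudo kubeadm token create --print-join-command --ttl=0

    # On the joining node, using the printed values (placeholders shown here):
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address <node-ip>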
	I0718 20:38:03.232396    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 20:38:03.485331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000-m02 minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=false
	I0718 20:38:03.530961    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0718 20:38:03.578648    4727 start.go:319] duration metric: took 20.3358655s to joinCluster
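The two kubectl invocations just above finish the join bookkeeping: the node gets minikube's metadata labels, and its control-plane NoSchedule taint is removed because the profile marks m02 as both ControlPlane:true and Worker:true. The generic form of those steps (node name is illustrative):

    # Attach labels to a node (here minikube's bookkeeping labels).
    kubectl label --overwrite nodes <node-name> minikube.k8s.io/primary=false

    # Remove the control-plane taint; the trailing '-' means "delete this taint",
    # which lets ordinary workloads schedule onto the control-plane node.
    kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-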
	I0718 20:38:03.578688    4727 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:03.578898    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:03.583884    4727 out.go:177] * Verifying Kubernetes components...
	I0718 20:38:03.590972    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:03.702999    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:38:03.709797    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:38:03.709929    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0718 20:38:03.709957    4727 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.105.254:8443 with https://192.168.105.5:8443
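The kubeconfig for this profile points at the HA VIP (https://192.168.105.254:8443), but for the verification phase the client is re-pointed at the primary node's API server, hence the override to https://192.168.105.5:8443. The same override can be reproduced by hand with kubectl's --server flag; a sketch using this run's kubeconfig path:

    kubectl --kubeconfig /Users/jenkins/minikube-integration/19302-1213/kubeconfig \
            --server https://192.168.105.5:8443 get nodes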
	I0718 20:38:03.710058    4727 node_ready.go:35] waiting up to 6m0s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:03.710093    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:03.710097    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:03.710101    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:03.710109    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:03.716299    4727 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 20:38:04.212157    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.212175    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.212180    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.212182    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.217870    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:38:04.711681    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.711692    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.711696    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.711698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.713463    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.212138    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.212149    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.212153    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.212156    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.214175    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:05.711331    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.711345    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.711360    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.711363    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.712682    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.713155    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:06.210250    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.210264    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.210268    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.210271    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.212254    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:06.711235    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.711255    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.711260    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.711262    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.712940    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.212089    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.212100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.212104    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.212106    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.214317    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:07.712070    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.712079    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.712083    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.712086    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.713825    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.714102    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:08.211862    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.211878    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.211883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.211885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.213993    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:08.712062    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.712075    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.712079    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.712081    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.713753    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.212027    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.212036    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.212052    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.212055    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.213833    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.712020    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.712029    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.712033    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.712035    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.713439    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.212016    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.212025    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.212029    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.212031    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.213662    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.213924    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:10.711085    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.711100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.711114    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.711117    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.712848    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.211980    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.211995    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.211999    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.212002    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.213760    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.711981    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.711994    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.712005    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.712008    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.713435    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.211955    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.211969    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.211974    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.211976    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.213759    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.214202    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:12.711912    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.711929    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.711933    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.711935    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.713382    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.211920    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.211932    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.211941    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.211943    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.213828    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.711194    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.711206    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.711209    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.711211    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.712757    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:14.211901    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.211919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.211924    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.211932    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.213956    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:14.214285    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:14.711860    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.711876    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.711883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.711885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.713170    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.211895    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.211907    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.211911    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.211913    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.213693    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.711835    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.711849    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.711863    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.711865    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.713487    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.211839    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.211844    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.211846    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.213365    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.711659    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.711669    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.711673    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.711675    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.713252    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.713433    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:17.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.211830    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.211834    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.211836    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.213413    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:17.711756    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.711781    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.711785    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.711788    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.713341    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.211779    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.211794    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.211798    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.211800    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.213551    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.711749    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.711759    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.711764    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.711766    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.713325    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.713645    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:19.211738    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.211750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.211754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.211756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.213507    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:19.711717    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.711731    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.711734    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.711736    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.713476    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.211230    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.211271    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.211314    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.211318    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.212922    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.710773    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.710783    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.710787    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.710790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.712163    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.211705    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.211717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.211738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.211742    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.213362    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.213898    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:21.711683    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.711698    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.711702    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.711704    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.713411    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.211928    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.211938    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.211942    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.211944    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.214292    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.214473    4727 node_ready.go:49] node "ha-256000-m02" has status "Ready":"True"
	I0718 20:38:22.214479    4727 node_ready.go:38] duration metric: took 18.50492425s for node "ha-256000-m02" to be "Ready" ...
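The repeated GETs above are a roughly half-second poll of the node object until its Ready condition flips to True, which here took about 18.5s. Outside the test harness the same wait can be expressed with kubectl; a sketch:

    # Block until the node reports Ready, or fail after the timeout.
    kubectl wait --for=condition=Ready node/ha-256000-m02 --timeout=6m

    # Or inspect the condition directly, mirroring what the poll reads:
    kubectl get node ha-256000-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'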
	I0718 20:38:22.214483    4727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:22.214513    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:22.214523    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.214528    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.214533    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.216823    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.221656    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.221688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gl7wn
	I0718 20:38:22.221691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.221695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.221698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.223037    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.223438    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.223443    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.223447    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.223449    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.224627    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.224906    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.224912    4727 pod_ready.go:81] duration metric: took 3.247917ms for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224916    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224935    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t5fk7
	I0718 20:38:22.224937    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.224950    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.224954    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.226106    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.226400    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.226404    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.226411    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.226414    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.227526    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.227886    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.227891    4727 pod_ready.go:81] duration metric: took 2.972458ms for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227894    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227913    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000
	I0718 20:38:22.227919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.227923    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.227925    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.228991    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.229395    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.229399    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.229402    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.229406    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.230465    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.230693    4727 pod_ready.go:92] pod "etcd-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.230699    4727 pod_ready.go:81] duration metric: took 2.801916ms for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230703    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230720    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000-m02
	I0718 20:38:22.230723    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.230726    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.230728    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.231834    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.232263    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.232268    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.232271    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.232273    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.233360    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.233783    4727 pod_ready.go:92] pod "etcd-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.233789    4727 pod_ready.go:81] duration metric: took 3.083416ms for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.233794    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.413762    4727 request.go:629] Waited for 179.941666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413824    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413828    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.413841    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.413846    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.415462    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.613785    4727 request.go:629] Waited for 197.877917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613838    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613844    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.613847    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.613849    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.616581    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.616806    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.616814    4727 pod_ready.go:81] duration metric: took 383.02725ms for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.616819    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.813743    4727 request.go:629] Waited for 196.894708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813781    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813784    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.813788    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.813790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.815511    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.012375    4727 request.go:629] Waited for 196.496584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012418    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012422    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.012426    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.012428    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.014100    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.014297    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.014304    4727 pod_ready.go:81] duration metric: took 397.4915ms for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.014308    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.213728    4727 request.go:629] Waited for 199.392916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213764    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213767    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.213771    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.213774    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.215292    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.412016    4727 request.go:629] Waited for 196.230667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412048    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412050    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.412055    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.412057    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.414117    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.414317    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.414324    4727 pod_ready.go:81] duration metric: took 400.022917ms for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.414329    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.613726    4727 request.go:629] Waited for 199.367083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613754    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613757    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.613760    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.613763    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.615829    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.813718    4727 request.go:629] Waited for 197.566667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813747    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.813754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.813756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.815391    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.815670    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.815679    4727 pod_ready.go:81] duration metric: took 401.357791ms for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.815685    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.013744    4727 request.go:629] Waited for 198.028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013777    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013780    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.013783    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.013785    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.015358    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.213717    4727 request.go:629] Waited for 197.87625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213750    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213772    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.213776    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.213779    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.215177    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.215486    4727 pod_ready.go:92] pod "kube-proxy-99sn4" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.215494    4727 pod_ready.go:81] duration metric: took 399.816291ms for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.215499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.412543    4727 request.go:629] Waited for 197.022333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412572    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412576    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.412580    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.412582    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.414200    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.613688    4727 request.go:629] Waited for 199.188292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613723    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613734    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.613738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.613740    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.616115    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:24.616487    4727 pod_ready.go:92] pod "kube-proxy-jxnv9" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.616495    4727 pod_ready.go:81] duration metric: took 401.003958ms for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.616499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.811999    4727 request.go:629] Waited for 195.4745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812037    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812040    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.812044    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.812046    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.813599    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.013712    4727 request.go:629] Waited for 199.880375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013743    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013746    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.013750    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.013752    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.015408    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.015677    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.015685    4727 pod_ready.go:81] duration metric: took 399.1935ms for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.015689    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.213690    4727 request.go:629] Waited for 197.964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213729    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213735    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.213739    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.213741    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.215582    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.413674    4727 request.go:629] Waited for 197.841584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413700    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413702    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.413714    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.413717    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.415433    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.415627    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.415633    4727 pod_ready.go:81] duration metric: took 399.951542ms for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.415638    4727 pod_ready.go:38] duration metric: took 3.201238458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
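Each per-pod check above pairs a GET on the pod with a GET on its node and passes once the pod's Ready condition is True; the ~200ms "Waited for ... due to client-side throttling" gaps are the Kubernetes client's own rate limiter queueing requests, not server-side priority and fairness. A kubectl sketch of the same per-component wait, reusing the label selectors listed in the log:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done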
	I0718 20:38:25.415647    4727 api_server.go:52] waiting for apiserver process to appear ...
	I0718 20:38:25.415719    4727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:38:25.421413    4727 api_server.go:72] duration metric: took 21.843316333s to wait for apiserver process to appear ...
	I0718 20:38:25.421422    4727 api_server.go:88] waiting for apiserver healthz status ...
	I0718 20:38:25.421429    4727 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0718 20:38:25.424174    4727 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
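The health probe above hits the API server's /healthz endpoint and expects a literal "ok" body. Equivalent manual checks (the raw HTTPS variant may need client credentials if anonymous access to /healthz is disabled):

    # Through the kubeconfig's credentials:
    kubectl get --raw /healthz

    # Or straight at the node, skipping certificate verification for brevity:
    curl -k https://192.168.105.5:8443/healthz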
	I0718 20:38:25.424198    4727 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0718 20:38:25.424200    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.424204    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.424207    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.424682    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:38:25.424723    4727 api_server.go:141] control plane version: v1.30.3
	I0718 20:38:25.424729    4727 api_server.go:131] duration metric: took 3.305084ms to wait for apiserver health ...
	I0718 20:38:25.424732    4727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 20:38:25.613673    4727 request.go:629] Waited for 188.916583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613714    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.613721    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.613723    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.616608    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:25.620463    4727 system_pods.go:59] 17 kube-system pods found
	I0718 20:38:25.620472    4727 system_pods.go:61] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:25.620475    4727 system_pods.go:61] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:25.620477    4727 system_pods.go:61] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:25.620479    4727 system_pods.go:61] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:25.620480    4727 system_pods.go:61] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:25.620482    4727 system_pods.go:61] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:25.620484    4727 system_pods.go:61] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:25.620486    4727 system_pods.go:61] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:25.620488    4727 system_pods.go:61] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:25.620490    4727 system_pods.go:61] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:25.620492    4727 system_pods.go:61] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:25.620493    4727 system_pods.go:61] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:25.620495    4727 system_pods.go:61] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:25.620497    4727 system_pods.go:61] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:25.620498    4727 system_pods.go:61] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:25.620500    4727 system_pods.go:61] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:25.620502    4727 system_pods.go:61] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:25.620505    4727 system_pods.go:74] duration metric: took 195.775375ms to wait for pod list to return data ...
	I0718 20:38:25.620509    4727 default_sa.go:34] waiting for default service account to be created ...
	I0718 20:38:25.813683    4727 request.go:629] Waited for 193.137584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813709    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813712    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.813716    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.813721    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.815354    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.815466    4727 default_sa.go:45] found service account: "default"
	I0718 20:38:25.815474    4727 default_sa.go:55] duration metric: took 194.966875ms for default service account to be created ...
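The two gates just completed are simple inventory checks: system_pods lists everything in kube-system, and default_sa confirms the "default" ServiceAccount exists so workloads can be admitted. Equivalent spot checks:

    kubectl -n kube-system get pods
    kubectl -n default get serviceaccount default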
	I0718 20:38:25.815479    4727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 20:38:26.013652    4727 request.go:629] Waited for 198.147166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.013695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.013702    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.016448    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:26.020596    4727 system_pods.go:86] 17 kube-system pods found
	I0718 20:38:26.020604    4727 system_pods.go:89] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:26.020607    4727 system_pods.go:89] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:26.020609    4727 system_pods.go:89] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:26.020611    4727 system_pods.go:89] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:26.020613    4727 system_pods.go:89] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:26.020615    4727 system_pods.go:89] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:26.020617    4727 system_pods.go:89] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:26.020619    4727 system_pods.go:89] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:26.020621    4727 system_pods.go:89] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:26.020622    4727 system_pods.go:89] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:26.020624    4727 system_pods.go:89] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:26.020626    4727 system_pods.go:89] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:26.020628    4727 system_pods.go:89] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:26.020629    4727 system_pods.go:89] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:26.020631    4727 system_pods.go:89] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:26.020633    4727 system_pods.go:89] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:26.020635    4727 system_pods.go:89] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:26.020641    4727 system_pods.go:126] duration metric: took 205.165291ms to wait for k8s-apps to be running ...
	I0718 20:38:26.020645    4727 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 20:38:26.020720    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:38:26.027026    4727 system_svc.go:56] duration metric: took 6.37875ms WaitForService to wait for kubelet
	I0718 20:38:26.027036    4727 kubeadm.go:582] duration metric: took 22.448955791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:38:26.027047    4727 node_conditions.go:102] verifying NodePressure condition ...
	I0718 20:38:26.213670    4727 request.go:629] Waited for 186.592667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213748    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213751    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.213756    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.213758    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.215369    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:26.215702    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215710    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215716    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215719    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215721    4727 node_conditions.go:105] duration metric: took 188.677125ms to run NodePressure ...
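For reference, the checks logged above (kube-system pods running, kubelet active, node CPU and ephemeral-storage capacity) can be repeated by hand against the same profile. A minimal sketch, assuming kubectl uses the context minikube creates for the profile (ha-256000):

    # Pods the test waited on in kube-system (etcd, apiserver, kube-proxy, kube-vip, ...)
    kubectl --context ha-256000 get pods -n kube-system -o wide

    # kubelet check equivalent to the systemctl probe run over SSH above
    minikube -p ha-256000 ssh -- sudo systemctl is-active kubelet

    # Capacity fields behind the NodePressure check (cpu and ephemeral-storage per node)
    kubectl --context ha-256000 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'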
	I0718 20:38:26.215733    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:38:26.215747    4727 start.go:255] writing updated cluster config ...
	I0718 20:38:26.221138    4727 out.go:177] 
	I0718 20:38:26.225195    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:26.225251    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.230070    4727 out.go:177] * Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	I0718 20:38:26.238085    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:38:26.238092    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:38:26.238177    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:38:26.238184    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:38:26.238226    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.238529    4727 start.go:360] acquireMachinesLock for ha-256000-m03: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:38:26.238563    4727 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "ha-256000-m03"
	I0718 20:38:26.238573    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:26.238613    4727 start.go:125] createHost starting for "m03" (driver="qemu2")
	I0718 20:38:26.243026    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:38:26.268172    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:38:26.268206    4727 client.go:168] LocalClient.Create starting
	I0718 20:38:26.268290    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:38:26.268328    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268338    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268376    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:38:26.268399    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268406    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268691    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:38:26.426584    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:38:26.572781    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:38:26.572789    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:38:26.573022    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.588299    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.588321    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.588408    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2 +20000M
	I0718 20:38:26.597072    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:38:26.597089    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.597102    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.597113    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:38:26.597129    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:38:26.597163    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7f:0e:0c:6d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.641473    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.641500    4727 main.go:141] libmachine: STDERR: 
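The disk for the new node is prepared with the two qemu-img calls shown above: a raw-to-qcow2 conversion of the base image followed by a +20000M resize. The same sequence, reduced to a sketch with a placeholder machine directory in place of the Jenkins workspace path:

    # Placeholder for the per-machine directory under .minikube/machines
    MACHINE_DIR=/path/to/.minikube/machines/ha-256000-m03

    # Convert the raw base disk into qcow2 (same flags as logged above)
    qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"

    # Grow the image by the requested 20000 MB
    qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M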
	I0718 20:38:26.641504    4727 main.go:141] libmachine: Attempt 0
	I0718 20:38:26.641520    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:26.641735    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:26.641749    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:26.641756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:26.641761    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:26.641765    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:26.641770    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:26.641776    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:28.643878    4727 main.go:141] libmachine: Attempt 1
	I0718 20:38:28.643913    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:28.644011    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:28.644023    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:28.644028    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:28.644032    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:28.644036    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:28.644046    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:28.644052    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:30.646081    4727 main.go:141] libmachine: Attempt 2
	I0718 20:38:30.646120    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:30.646235    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:30.646244    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:30.646250    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:30.646254    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:30.646258    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:30.646262    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:30.646267    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:32.648349    4727 main.go:141] libmachine: Attempt 3
	I0718 20:38:32.648374    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:32.648466    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:32.648477    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:32.648481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:32.648486    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:32.648497    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:32.648501    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:32.648514    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:34.650548    4727 main.go:141] libmachine: Attempt 4
	I0718 20:38:34.650566    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:34.650664    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:34.650674    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:34.650678    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:34.650682    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:34.650686    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:34.650692    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:34.650696    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:36.652758    4727 main.go:141] libmachine: Attempt 5
	I0718 20:38:36.652796    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:36.652971    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:36.652995    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:36.653008    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:36.653088    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:36.653108    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:36.653113    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:36.653119    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:38.654089    4727 main.go:141] libmachine: Attempt 6
	I0718 20:38:38.654205    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:38.654304    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:38.654315    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:38.654320    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:38.654329    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:38.654333    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:38.654338    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:38.654343    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:40.656398    4727 main.go:141] libmachine: Attempt 7
	I0718 20:38:40.656425    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:40.656535    4727 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0718 20:38:40.656552    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:d2:7f:e:c:6d:ba ID:1,d2:7f:e:c:6d:ba Lease:0x669b313f}
	I0718 20:38:40.656554    4727 main.go:141] libmachine: Found match: d2:7f:e:c:6d:ba
	I0718 20:38:40.656561    4727 main.go:141] libmachine: IP: 192.168.105.7
	I0718 20:38:40.656567    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
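Note that the lease search uses a zero-stripped form of the MAC assigned to the VM: the NIC was created with d2:7f:0e:0c:6d:ba, but the macOS lease file records octets without leading zeros, so the matcher looks for d2:7f:e:c:6d:ba. A rough way to repeat the lookup from the host (illustrative only; the file is the macOS Internet Sharing lease database):

    # Show the lease block around the stripped MAC; the matching ip_address sits in the same block
    grep -C3 'd2:7f:e:c:6d:ba' /var/db/dhcpd_leases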
	I0718 20:38:49.679874    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:38:49.680098    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.680386    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.680393    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:38:49.720341    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:38:49.720352    4727 buildroot.go:166] provisioning hostname "ha-256000-m03"
	I0718 20:38:49.720396    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.720501    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.720507    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m03 && echo "ha-256000-m03" | sudo tee /etc/hostname
	I0718 20:38:49.765619    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m03
	
	I0718 20:38:49.765691    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.765821    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.765830    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:38:49.809445    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:38:49.809457    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:38:49.809463    4727 buildroot.go:174] setting up certificates
	I0718 20:38:49.809467    4727 provision.go:84] configureAuth start
	I0718 20:38:49.809471    4727 provision.go:143] copyHostCerts
	I0718 20:38:49.809497    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809560    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:38:49.809567    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809680    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:38:49.810515    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810551    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:38:49.810554    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810618    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:38:49.810856    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810884    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:38:49.810888    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810942    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:38:49.811128    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m03 san=[127.0.0.1 192.168.105.7 ha-256000-m03 localhost minikube]
	I0718 20:38:49.892392    4727 provision.go:177] copyRemoteCerts
	I0718 20:38:49.892426    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:38:49.892435    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:49.917004    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:38:49.917069    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:38:49.925760    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:38:49.925809    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:38:49.934495    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:38:49.934547    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
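The server certificate generated above carries SANs for 127.0.0.1, 192.168.105.7, ha-256000-m03, localhost and minikube, and is copied to /etc/docker on the node. A quick host-side check of what was generated, sketched with a placeholder for the .minikube directory used in this run:

    # Placeholder for the MINIKUBE_HOME directory from the test run
    MINIKUBE_HOME=/path/to/.minikube

    # Print the SAN list of the server certificate that was pushed to /etc/docker/server.pem
    openssl x509 -in "$MINIKUBE_HOME/machines/server.pem" -noout -text | grep -A1 'Subject Alternative Name'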
	I0718 20:38:49.944465    4727 provision.go:87] duration metric: took 134.994083ms to configureAuth
	I0718 20:38:49.944477    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:38:49.946418    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:49.946460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.946554    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.946559    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:38:49.988863    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:38:49.988874    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:38:49.988957    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:38:49.989005    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.989117    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.989151    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	Environment="NO_PROXY=192.168.105.5,192.168.105.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:38:50.033434    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	Environment=NO_PROXY=192.168.105.5,192.168.105.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:38:50.033494    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:50.033609    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:50.033618    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:38:51.357934    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:38:51.357948    4727 machine.go:97] duration metric: took 1.678110291s to provisionDockerMachine
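The unit written above clears the inherited ExecStart=, re-sets it with the TLS and insecure-registry flags, and injects NO_PROXY for the other control-plane IPs; after the diff/mv/daemon-reload/restart sequence it becomes the loaded docker.service. The effective unit can be read back from the node, sketched here with minikube's --node flag (assumed to target the new machine):

    # Unit file systemd actually loaded, including the overridden ExecStart
    minikube -p ha-256000 ssh -n ha-256000-m03 -- sudo systemctl cat docker.service

    # Environment assignments (NO_PROXY) picked up by the service
    minikube -p ha-256000 ssh -n ha-256000-m03 -- sudo systemctl show docker --property=Environment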
	I0718 20:38:51.357955    4727 client.go:171] duration metric: took 25.090436s to LocalClient.Create
	I0718 20:38:51.357970    4727 start.go:167] duration metric: took 25.090492834s to libmachine.API.Create "ha-256000"
	I0718 20:38:51.357987    4727 start.go:293] postStartSetup for "ha-256000-m03" (driver="qemu2")
	I0718 20:38:51.357993    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:38:51.358064    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:38:51.358075    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.383362    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:38:51.385220    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:38:51.385229    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:38:51.385339    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:38:51.385460    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:38:51.385466    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:38:51.385589    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:38:51.389076    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:38:51.397667    4727 start.go:296] duration metric: took 39.676333ms for postStartSetup
	I0718 20:38:51.398148    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:51.398353    4727 start.go:128] duration metric: took 25.1604295s to createHost
	I0718 20:38:51.398381    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:51.398475    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:51.398479    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:38:51.443684    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360331.726119547
	
	I0718 20:38:51.443697    4727 fix.go:216] guest clock: 1721360331.726119547
	I0718 20:38:51.443701    4727 fix.go:229] Guest: 2024-07-18 20:38:51.726119547 -0700 PDT Remote: 2024-07-18 20:38:51.39836 -0700 PDT m=+164.266937085 (delta=327.759547ms)
	I0718 20:38:51.443713    4727 fix.go:200] guest clock delta is within tolerance: 327.759547ms
	I0718 20:38:51.443716    4727 start.go:83] releasing machines lock for "ha-256000-m03", held for 25.205843709s
	I0718 20:38:51.447883    4727 out.go:177] * Found network options:
	I0718 20:38:51.451892    4727 out.go:177]   - NO_PROXY=192.168.105.5,192.168.105.6
	W0718 20:38:51.455815    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.455829    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456208    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456223    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:38:51.456298    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:38:51.456327    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:38:51.479804    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:38:51.479862    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:38:51.524774    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:38:51.524786    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.524847    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.531855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:38:51.535855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:38:51.539545    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.539580    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:38:51.543520    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.547437    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:38:51.551284    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.555870    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:38:51.559926    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:38:51.563772    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:38:51.567972    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:38:51.572324    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:38:51.576791    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:38:51.580307    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.641726    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
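The sed edits above pin containerd to the cgroupfs driver, the runc v2 shim, and the pause:3.9 sandbox image before the daemon is restarted. Whether they took effect can be read back from the rendered config (an illustrative check, not part of the test run):

    # Settings rewritten by the sed commands in this step
    minikube -p ha-256000 ssh -n ha-256000-m03 -- grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml

    # CRI endpoint written to /etc/crictl.yaml earlier in the step
    minikube -p ha-256000 ssh -n ha-256000-m03 -- cat /etc/crictl.yaml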
	I0718 20:38:51.654538    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.654606    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:38:51.661500    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.671940    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:38:51.683005    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.689286    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.694846    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:38:51.739658    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.745604    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.752465    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:38:51.754039    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:38:51.757754    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:38:51.764400    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:38:51.833658    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:38:51.901993    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.902021    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:38:51.910153    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.983567    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:39:53.221259    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.239360917s)
	I0718 20:39:53.221338    4727 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 20:39:53.233907    4727 out.go:177] 
	W0718 20:39:53.237861    4727 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:38:50 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531478880Z" level=info msg="Starting up"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531868672Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.532448547Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.550167964Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560007672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560035005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560063505Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560074839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560111130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560123547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560217922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560230922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560237130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560241589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560270464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560366505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561097130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561114380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561185047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561197839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561245172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561280130Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563923422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563946005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563952880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563959547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563972505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564012380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564132589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564175464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564185714Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564191797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564197839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564204005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564210464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564216297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564222297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564228089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564233922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564239422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564256255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564264589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564270589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564276339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564281380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564287547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564292755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564298214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564303922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564310047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564315047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564320255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564325630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564332547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564341589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564346797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564352089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564402380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564416755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564421630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564427380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564432047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564437755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564467089Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564611964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564632964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564646839Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564655005Z" level=info msg="containerd successfully booted in 0.014823s"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.553636672Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.561497047Z" level=info msg="Loading containers: start."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.589775631Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.620757631Z" level=info msg="Loading containers: done."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624562881Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624599339Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:38:51 ha-256000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641454297Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641495839Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:38:52 ha-256000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.265389656Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266153693Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266192011Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266216137Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266284865Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:53 ha-256000-m03 dockerd[931]: time="2024-07-19T03:38:53.282812481Z" level=info msg="Starting up"
	Jul 19 03:39:53 ha-256000-m03 dockerd[931]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 20:39:53.237915    4727 out.go:239] * 
	W0718 20:39:53.239556    4727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:39:53.244752    4727 out.go:177] 
	
	
	==> Docker <==
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62c92a2e03424d74abec35244521f1b7761982d7dbb7311513fb13f822c225ed/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f20cc01dd922b82b1ee5c6472024624755b1340ebceab21cf25c6eacf6e19c4/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5db9ae745b118ebe428663f3f1c8c679cdc1a26cea72ee6016f951ae34fc28ea/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858940540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858976718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858984229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.859018904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861914444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861992224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862003156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862051518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889214398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889287171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889293388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889346507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061800448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061875454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061930291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a81719e2049682e90e011b40424dd53e2ae913d00000287c821ac163206c9b20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 03:39:56 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404399110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404453937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404462477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404689325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf6fa4236c452       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   a81719e204968       busybox-fc5497c4f-5922h
	6dfd469e7d36e       ba04bb24b9575                                                                                         15 minutes ago      Running             storage-provisioner       0                   5db9ae745b118       storage-provisioner
	1097379f4f6cb       2437cf7621777                                                                                         15 minutes ago      Running             coredns                   0                   62c92a2e03424       coredns-7db6d8ff4d-gl7wn
	9a1c088f8966e       2437cf7621777                                                                                         15 minutes ago      Running             coredns                   0                   5f20cc01dd922       coredns-7db6d8ff4d-t5fk7
	74fc7ee221313       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              15 minutes ago      Running             kindnet-cni               0                   f7fb0ae46c979       kindnet-znvgn
	9103cd3e30ac5       2351f570ed0ea                                                                                         15 minutes ago      Running             kube-proxy                0                   dd4c5c6f3ce08       kube-proxy-jxnv9
	8128016ed9c34       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   e405a8655e904       kube-vip-ha-256000
	d5ff116ccff16       014faa467e297                                                                                         15 minutes ago      Running             etcd                      0                   1dd441769aa2a       etcd-ha-256000
	29f96bba40d3a       d48f992a22722                                                                                         15 minutes ago      Running             kube-scheduler            0                   aa59c4a58dba5       kube-scheduler-ha-256000
	70ffd55232c0b       8e97cdb19e7cc                                                                                         15 minutes ago      Running             kube-controller-manager   0                   96446dab38e98       kube-controller-manager-ha-256000
	dff4e67b66806       61773190d42ff                                                                                         15 minutes ago      Running             kube-apiserver            0                   877c87b7df476       kube-apiserver-ha-256000
	
	
	==> coredns [1097379f4f6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37765 - 42644 "HINFO IN 3312804127670044151.9315725327003923. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.009474143s
	[INFO] 10.244.0.4:33989 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.044131336s
	[INFO] 10.244.0.4:49979 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001205888s
	[INFO] 10.244.1.2:54862 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000064045s
	[INFO] 10.244.0.4:54057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097379s
	[INFO] 10.244.0.4:39996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065545s
	[INFO] 10.244.0.4:39732 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063878s
	[INFO] 10.244.1.2:57277 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070961s
	[INFO] 10.244.1.2:44544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00059536s
	[INFO] 10.244.1.2:33879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042043s
	[INFO] 10.244.1.2:41170 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039002s
	[INFO] 10.244.0.4:32818 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000023751s
	[INFO] 10.244.0.4:44658 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027251s
	[INFO] 10.244.1.2:36566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093796s
	[INFO] 10.244.1.2:41685 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035752s
	[INFO] 10.244.1.2:36603 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000019667s
	[INFO] 10.244.0.4:51415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000060336s
	[INFO] 10.244.0.4:50758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000047377s
	[INFO] 10.244.1.2:56872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077712s
	[INFO] 10.244.1.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047752s
	[INFO] 10.244.1.2:48345 - 5 "PTR IN 1.105.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000043752s
	
	
	==> coredns [9a1c088f8966] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42392 - 40278 "HINFO IN 2632545797447059373.9195703630793318012. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009665964s
	[INFO] 10.244.0.4:39096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234719s
	[INFO] 10.244.0.4:39212 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010352553s
	[INFO] 10.244.1.2:39974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082254s
	[INFO] 10.244.1.2:48244 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00062732s
	[INFO] 10.244.1.2:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000022126s
	[INFO] 10.244.0.4:43528 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001761788s
	[INFO] 10.244.0.4:39922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072504s
	[INFO] 10.244.0.4:40557 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054253s
	[INFO] 10.244.0.4:36599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000831538s
	[INFO] 10.244.0.4:35378 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072337s
	[INFO] 10.244.1.2:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082296s
	[INFO] 10.244.1.2:55926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000027209s
	[INFO] 10.244.1.2:50938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031001s
	[INFO] 10.244.1.2:32874 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004696s
	[INFO] 10.244.0.4:39411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067337s
	[INFO] 10.244.0.4:56069 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000028543s
	[INFO] 10.244.1.2:60061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076628s
	[INFO] 10.244.0.4:57199 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087171s
	[INFO] 10.244.0.4:55865 - 5 "PTR IN 1.105.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000063753s
	[INFO] 10.244.1.2:50952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059502s
	
	
	==> describe nodes <==
	Name:               ha-256000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-256000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d710ce1e1896426084c421362e18dda0
	  System UUID:                d710ce1e1896426084c421362e18dda0
	  Boot ID:                    83486cc1-e7b0-4568-bb5a-c46474de14e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5922h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-gl7wn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-t5fk7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-256000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-znvgn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-256000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-256000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-jxnv9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-256000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-256000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-256000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-256000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-256000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	  Normal  NodeReady                15m   kubelet          Node ha-256000 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	
	
	Name:               ha-256000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ha-256000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  System UUID:                b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  Boot ID:                    b548924b-9c86-4ba2-9a9e-2e5cc7830327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqdhb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-256000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-2mvfm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-256000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-256000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-99sn4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-256000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-256000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-256000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	
	
	Name:               ha-256000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_52_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:52:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.8
	  Hostname:    ha-256000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7ef708e53a8467ea694f2dae8b4a441
	  System UUID:                f7ef708e53a8467ea694f2dae8b4a441
	  Boot ID:                    47c8adab-13e3-4772-b14f-a5c3454cbce2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkhd4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-5jkfp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26s
	  kube-system                 kube-proxy-2l55x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s (x3 over 26s)  kubelet          Node ha-256000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x3 over 26s)  kubelet          Node ha-256000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x3 over 26s)  kubelet          Node ha-256000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node ha-256000-m04 event: Registered Node ha-256000-m04 in Controller
	  Normal  RegisteredNode           21s                node-controller  Node ha-256000-m04 event: Registered Node ha-256000-m04 in Controller
	  Normal  NodeReady                4s                 kubelet          Node ha-256000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650707] EINJ: EINJ table not found.
	[  +0.549800] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.136927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000360] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +3.624626] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.080461] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.034842] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.469016] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.194273] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.081032] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.086446] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +2.293076] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.088824] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.085311] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.095642] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[  +2.542348] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.036994] kauditd_printk_skb: 257 callbacks suppressed
	[  +2.330914] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +2.194691] systemd-fstab-generator[1695]: Ignoring "noauto" option for root device
	[  +0.779104] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.727432] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +15.155229] kauditd_printk_skb: 62 callbacks suppressed
	[Jul19 03:37] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 03:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5ff116ccff1] <==
	{"level":"info","ts":"2024-07-19T03:38:39.213772Z","caller":"traceutil/trace.go:171","msg":"trace[213955580] linearizableReadLoop","detail":"{readStateIndex:773; appliedIndex:773; }","duration":"854.090297ms","start":"2024-07-19T03:38:38.359661Z","end":"2024-07-19T03:38:39.213752Z","steps":["trace[213955580] 'read index received'  (duration: 854.085672ms)","trace[213955580] 'applied index is now lower than readState.Index'  (duration: 1.458µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:38:39.214653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"854.964275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.5\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-19T03:38:39.214668Z","caller":"traceutil/trace.go:171","msg":"trace[64905690] range","detail":"{range_begin:/registry/masterleases/192.168.105.5; range_end:; response_count:1; response_revision:726; }","duration":"855.016063ms","start":"2024-07-19T03:38:38.359648Z","end":"2024-07-19T03:38:39.214664Z","steps":["trace[64905690] 'agreement among raft nodes before linearized reading'  (duration: 854.846409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.214698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.359622Z","time spent":"855.063476ms","remote":"127.0.0.1:50924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.105.5\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.217551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.784693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.217629Z","caller":"traceutil/trace.go:171","msg":"trace[485073674] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:726; }","duration":"181.858104ms","start":"2024-07-19T03:38:39.035755Z","end":"2024-07-19T03:38:39.217613Z","steps":["trace[485073674] 'agreement among raft nodes before linearized reading'  (duration: 181.775735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.961025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-19T03:38:39.218206Z","caller":"traceutil/trace.go:171","msg":"trace[1437088211] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:726; }","duration":"362.976608ms","start":"2024-07-19T03:38:38.855164Z","end":"2024-07-19T03:38:39.218141Z","steps":["trace[1437088211] 'agreement among raft nodes before linearized reading'  (duration: 362.940194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.855138Z","time spent":"363.085141ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.219731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.350481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.21976Z","caller":"traceutil/trace.go:171","msg":"trace[1532987535] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:726; }","duration":"513.381938ms","start":"2024-07-19T03:38:38.706374Z","end":"2024-07-19T03:38:39.219756Z","steps":["trace[1532987535] 'agreement among raft nodes before linearized reading'  (duration: 509.325689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.219771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.706284Z","time spent":"513.484013ms","remote":"127.0.0.1:50868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-19T03:46:36.540686Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2024-07-19T03:46:36.562489Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1175,"took":"20.474469ms","hash":3930648337,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-19T03:46:36.562693Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3930648337,"revision":1175,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T03:51:36.54679Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1806}
	{"level":"info","ts":"2024-07-19T03:51:36.56014Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1806,"took":"13.081219ms","hash":2540466080,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1347584,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2024-07-19T03:51:36.560169Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2540466080,"revision":1806,"compact-revision":1175}
	{"level":"info","ts":"2024-07-19T03:51:51.257001Z","caller":"traceutil/trace.go:171","msg":"trace[1986908692] transaction","detail":"{read_only:false; response_revision:2468; number_of_response:1; }","duration":"402.156149ms","start":"2024-07-19T03:51:50.85483Z","end":"2024-07-19T03:51:51.256986Z","steps":["trace[1986908692] 'process raft request'  (duration: 402.092938ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:51:51.263132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:51:50.85482Z","time spent":"402.257571ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2466 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"warn","ts":"2024-07-19T03:51:51.768488Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7133861002988234895,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-19T03:51:52.115346Z","caller":"traceutil/trace.go:171","msg":"trace[670184300] linearizableReadLoop","detail":"{readStateIndex:2864; appliedIndex:2864; }","duration":"847.429387ms","start":"2024-07-19T03:51:51.267715Z","end":"2024-07-19T03:51:52.115145Z","steps":["trace[670184300] 'read index received'  (duration: 847.427304ms)","trace[670184300] 'applied index is now lower than readState.Index'  (duration: 1.583µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:51:52.115442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"847.720859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-07-19T03:51:52.115453Z","caller":"traceutil/trace.go:171","msg":"trace[1715467008] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2468; }","duration":"847.742402ms","start":"2024-07-19T03:51:51.267707Z","end":"2024-07-19T03:51:52.115449Z","steps":["trace[1715467008] 'agreement among raft nodes before linearized reading'  (duration: 847.669315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:51:52.115464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:51:51.267688Z","time spent":"847.77332ms","remote":"127.0.0.1:51016","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1133,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	
	
	==> kernel <==
	 03:52:29 up 16 min,  0 users,  load average: 0.23, 0.15, 0.10
	Linux ha-256000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74fc7ee22131] <==
	I0719 03:51:49.218410       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:49.218412       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:59.214635       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:59.214703       1 main.go:303] handling current node
	I0719 03:51:59.214720       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:59.214730       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:09.209289       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:52:09.209308       1 main.go:303] handling current node
	I0719 03:52:09.209318       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:52:09.209320       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:09.209402       1 main.go:299] Handling node with IPs: map[192.168.105.8:{}]
	I0719 03:52:09.209409       1 main.go:326] Node ha-256000-m04 has CIDR [10.244.2.0/24] 
	I0719 03:52:09.209443       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.105.8 Flags: [] Table: 0} 
	I0719 03:52:19.212284       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:52:19.212307       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:19.212436       1 main.go:299] Handling node with IPs: map[192.168.105.8:{}]
	I0719 03:52:19.212444       1 main.go:326] Node ha-256000-m04 has CIDR [10.244.2.0/24] 
	I0719 03:52:19.212465       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:52:19.212488       1 main.go:303] handling current node
	I0719 03:52:29.209525       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:52:29.209550       1 main.go:303] handling current node
	I0719 03:52:29.209559       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:52:29.209562       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:29.209639       1 main.go:299] Handling node with IPs: map[192.168.105.8:{}]
	I0719 03:52:29.209646       1 main.go:326] Node ha-256000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [dff4e67b6680] <==
	W0719 03:36:38.357891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0719 03:36:38.358258       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:36:38.359450       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 03:36:39.162576       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 03:36:39.259455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 03:36:39.263308       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 03:36:39.266876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 03:36:53.692820       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 03:36:53.723447       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 03:38:39.230077       1 trace.go:236] Trace[99535700]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.105.5,type:*v1.Endpoints,resource:apiServerIPInfo (19-Jul-2024 03:38:38.359) (total time: 870ms):
	Trace[99535700]: ---"initial value restored" 856ms (03:38:39.216)
	Trace[99535700]: [870.770259ms] [870.770259ms] END
	E0719 03:51:35.729254       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50022: use of closed network connection
	E0719 03:51:35.841233       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50024: use of closed network connection
	E0719 03:51:36.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50029: use of closed network connection
	E0719 03:51:36.142429       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50031: use of closed network connection
	E0719 03:51:36.323525       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50036: use of closed network connection
	E0719 03:51:36.429306       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50038: use of closed network connection
	E0719 03:51:37.668910       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50053: use of closed network connection
	E0719 03:51:37.774366       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50055: use of closed network connection
	E0719 03:51:37.880279       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50057: use of closed network connection
	E0719 03:51:37.986190       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50059: use of closed network connection
	I0719 03:51:52.115940       1 trace.go:236] Trace[1625868550]: "Get" accept:application/json, */*,audit-id:4eb328c7-12ab-428c-8442-ad69a0af68f3,client:192.168.105.5,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/arm64) kubernetes/$Format,verb:GET (19-Jul-2024 03:51:51.267) (total time: 848ms):
	Trace[1625868550]: ---"About to write a response" 848ms (03:51:52.115)
	Trace[1625868550]: [848.499978ms] [848.499978ms] END
	
	
	==> kube-controller-manager [70ffd55232c0] <==
	I0719 03:37:23.294186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.391µs"
	I0719 03:37:23.772649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 03:38:01.950412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m02\" does not exist"
	I0719 03:38:01.956739       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 03:38:03.779798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m02"
	I0719 03:39:54.715082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.549011ms"
	I0719 03:39:54.728524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.544471ms"
	I0719 03:39:54.760521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.962639ms"
	I0719 03:39:54.798120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.556155ms"
	I0719 03:39:54.810232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.068766ms"
	I0719 03:39:54.810338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.794µs"
	I0719 03:39:56.791240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.855498ms"
	I0719 03:39:56.791390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.29µs"
	I0719 03:39:57.235525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.740732ms"
	I0719 03:39:57.236806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.25502ms"
	I0719 03:52:03.930437       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m04\" does not exist"
	I0719 03:52:03.936556       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m04" podCIDRs=["10.244.2.0/24"]
	I0719 03:52:04.000931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.293µs"
	I0719 03:52:08.902831       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m04"
	I0719 03:52:24.841255       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-256000-m04"
	I0719 03:52:24.852341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.751µs"
	I0719 03:52:24.857696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.876µs"
	I0719 03:52:24.862188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.168µs"
	I0719 03:52:26.804188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.587843ms"
	I0719 03:52:26.804338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.251µs"
	
	
	==> kube-proxy [9103cd3e30ac] <==
	I0719 03:36:54.228395       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:36:54.235224       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0719 03:36:54.286000       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:36:54.286028       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:36:54.286039       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:36:54.287034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:36:54.287396       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:36:54.287403       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:36:54.288184       1 config.go:192] "Starting service config controller"
	I0719 03:36:54.288259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:36:54.288280       1 config.go:319] "Starting node config controller"
	I0719 03:36:54.288282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:36:54.289304       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:36:54.289308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:36:54.388688       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:36:54.388711       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:36:54.389972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29f96bba40d3] <==
	W0719 03:36:38.043369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:36:38.043491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:36:38.078796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.078841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.135286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.135302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.143595       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:36:38.143607       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 03:36:40.612937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 03:39:54.727744       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	E0719 03:39:54.727817       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1bb5b7eb-c669-43f7-ac3f-753596620b94(default/busybox-fc5497c4f-5922h) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-5922h"
	E0719 03:39:54.727832       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" pod="default/busybox-fc5497c4f-5922h"
	I0719 03:39:54.727844       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	E0719 03:52:03.953546       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2l55x\": pod kube-proxy-2l55x is already assigned to node \"ha-256000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2l55x" node="ha-256000-m04"
	E0719 03:52:03.955782       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod be2735f1-1760-45b5-87ff-6b6f4b5b8ac7(kube-system/kube-proxy-2l55x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2l55x"
	E0719 03:52:03.955820       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2l55x\": pod kube-proxy-2l55x is already assigned to node \"ha-256000-m04\"" pod="kube-system/kube-proxy-2l55x"
	I0719 03:52:03.955838       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2l55x" node="ha-256000-m04"
	E0719 03:52:03.954010       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5jkfp\": pod kindnet-5jkfp is already assigned to node \"ha-256000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5jkfp" node="ha-256000-m04"
	E0719 03:52:03.957289       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c177fda4-9d9e-4d84-84af-339aedfeb9b0(kube-system/kindnet-5jkfp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5jkfp"
	E0719 03:52:03.957300       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5jkfp\": pod kindnet-5jkfp is already assigned to node \"ha-256000-m04\"" pod="kube-system/kindnet-5jkfp"
	I0719 03:52:03.957375       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5jkfp" node="ha-256000-m04"
	E0719 03:52:24.851934       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkhd4\": pod busybox-fc5497c4f-hkhd4 is already assigned to node \"ha-256000-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-hkhd4" node="ha-256000-m04"
	E0719 03:52:24.851965       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b5e17355-2549-46bd-a210-89247efbd5dd(default/busybox-fc5497c4f-hkhd4) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-hkhd4"
	E0719 03:52:24.851975       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkhd4\": pod busybox-fc5497c4f-hkhd4 is already assigned to node \"ha-256000-m04\"" pod="default/busybox-fc5497c4f-hkhd4"
	I0719 03:52:24.851984       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-hkhd4" node="ha-256000-m04"
	
	
	==> kubelet <==
	Jul 19 03:47:39 ha-256000 kubelet[2215]: E0719 03:47:39.079617    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:48:39 ha-256000 kubelet[2215]: E0719 03:48:39.080370    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:49:39 ha-256000 kubelet[2215]: E0719 03:49:39.079647    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:50:39 ha-256000 kubelet[2215]: E0719 03:50:39.079658    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:51:39 ha-256000 kubelet[2215]: E0719 03:51:39.080297    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:51:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:51:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:51:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:51:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-256000 -n ha-256000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-256000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (51.07s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-256000 status --output json -v=7 --alsologtostderr: exit status 2 (209.422834ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-256000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-256000-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-256000-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-256000-m04","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 20:52:31.661070    5208 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:52:31.661416    5208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:52:31.661423    5208 out.go:304] Setting ErrFile to fd 2...
	I0718 20:52:31.661425    5208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:52:31.661581    5208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:52:31.661715    5208 out.go:298] Setting JSON to true
	I0718 20:52:31.661733    5208 mustload.go:65] Loading cluster: ha-256000
	I0718 20:52:31.661795    5208 notify.go:220] Checking for updates...
	I0718 20:52:31.661958    5208 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:52:31.661966    5208 status.go:255] checking status of ha-256000 ...
	I0718 20:52:31.662987    5208 status.go:330] ha-256000 host status = "Running" (err=<nil>)
	I0718 20:52:31.662996    5208 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:52:31.663092    5208 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:52:31.663207    5208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:31.663217    5208 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:52:31.693464    5208 ssh_runner.go:195] Run: systemctl --version
	I0718 20:52:31.695863    5208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:31.701743    5208 kubeconfig.go:125] found "ha-256000" server: "https://192.168.105.254:8443"
	I0718 20:52:31.701764    5208 api_server.go:166] Checking apiserver status ...
	I0718 20:52:31.701788    5208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:52:31.707518    5208 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1991/cgroup
	W0718 20:52:31.711111    5208 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1991/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:52:31.711144    5208 ssh_runner.go:195] Run: ls
	I0718 20:52:31.712811    5208 api_server.go:253] Checking apiserver healthz at https://192.168.105.254:8443/healthz ...
	I0718 20:52:31.716465    5208 api_server.go:279] https://192.168.105.254:8443/healthz returned 200:
	ok
	I0718 20:52:31.716476    5208 status.go:422] ha-256000 apiserver status = Running (err=<nil>)
	I0718 20:52:31.716481    5208 status.go:257] ha-256000 status: &{Name:ha-256000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:52:31.716492    5208 status.go:255] checking status of ha-256000-m02 ...
	I0718 20:52:31.717064    5208 status.go:330] ha-256000-m02 host status = "Running" (err=<nil>)
	I0718 20:52:31.717071    5208 host.go:66] Checking if "ha-256000-m02" exists ...
	I0718 20:52:31.717159    5208 host.go:66] Checking if "ha-256000-m02" exists ...
	I0718 20:52:31.717266    5208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:31.717272    5208 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:52:31.745470    5208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:31.751637    5208 kubeconfig.go:125] found "ha-256000" server: "https://192.168.105.254:8443"
	I0718 20:52:31.751648    5208 api_server.go:166] Checking apiserver status ...
	I0718 20:52:31.751672    5208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:52:31.756737    5208 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup
	W0718 20:52:31.760720    5208 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1957/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:52:31.760745    5208 ssh_runner.go:195] Run: ls
	I0718 20:52:31.762304    5208 api_server.go:253] Checking apiserver healthz at https://192.168.105.254:8443/healthz ...
	I0718 20:52:31.764814    5208 api_server.go:279] https://192.168.105.254:8443/healthz returned 200:
	ok
	I0718 20:52:31.764824    5208 status.go:422] ha-256000-m02 apiserver status = Running (err=<nil>)
	I0718 20:52:31.764828    5208 status.go:257] ha-256000-m02 status: &{Name:ha-256000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:52:31.764835    5208 status.go:255] checking status of ha-256000-m03 ...
	I0718 20:52:31.765456    5208 status.go:330] ha-256000-m03 host status = "Running" (err=<nil>)
	I0718 20:52:31.765464    5208 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:52:31.765574    5208 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:52:31.765688    5208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:31.765694    5208 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:52:31.791744    5208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:31.798110    5208 kubeconfig.go:125] found "ha-256000" server: "https://192.168.105.254:8443"
	I0718 20:52:31.798120    5208 api_server.go:166] Checking apiserver status ...
	I0718 20:52:31.798141    5208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0718 20:52:31.802580    5208 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:52:31.802591    5208 status.go:422] ha-256000-m03 apiserver status = Stopped (err=<nil>)
	I0718 20:52:31.802596    5208 status.go:257] ha-256000-m03 status: &{Name:ha-256000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:52:31.802602    5208 status.go:255] checking status of ha-256000-m04 ...
	I0718 20:52:31.803174    5208 status.go:330] ha-256000-m04 host status = "Running" (err=<nil>)
	I0718 20:52:31.803182    5208 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:52:31.803271    5208 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:52:31.803374    5208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:31.803380    5208 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m04/id_rsa Username:docker}
	I0718 20:52:31.830574    5208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:52:31.836302    5208 status.go:257] ha-256000-m04 status: &{Name:ha-256000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-256000 status --output json -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:50 PDT | 18 Jul 24 20:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- get pods -o          | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-5922h -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.105.1           |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:51 PDT |
	|         | busybox-fc5497c4f-bqdhb -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.105.1           |           |         |         |                     |                     |
	| kubectl | -p ha-256000 -- exec                 | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT |                     |
	|         | busybox-fc5497c4f-hkhd4              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-256000 -v=7                | ha-256000 | jenkins | v1.33.1 | 18 Jul 24 20:51 PDT | 18 Jul 24 20:52 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:36:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:36:07.154539    4727 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:07.154652    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154655    4727 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:07.154657    4727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:07.154787    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:36:07.155777    4727 out.go:298] Setting JSON to false
	I0718 20:36:07.172062    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2135,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:36:07.172136    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:07.175769    4727 out.go:177] * [ha-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:36:07.182867    4727 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:07.182897    4727 notify.go:220] Checking for updates...
	I0718 20:36:07.188814    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:07.191895    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:36:07.192950    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:07.195871    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:36:07.198897    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:07.202011    4727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:07.205826    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 20:36:07.212869    4727 start.go:297] selected driver: qemu2
	I0718 20:36:07.212875    4727 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:36:07.212880    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:07.215027    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:36:07.217921    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0718 20:36:07.220933    4727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:36:07.220960    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:07.220968    4727 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 20:36:07.220971    4727 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 20:36:07.220995    4727 start.go:340] cluster config:
	{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:07.224405    4727 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:36:07.231878    4727 out.go:177] * Starting "ha-256000" primary control-plane node in "ha-256000" cluster
	I0718 20:36:07.235849    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:07.235880    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:36:07.235892    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:07.235960    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:07.235965    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:07.236167    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:07.236181    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json: {Name:mk4f96c33b167a65b92bd4e48e5f1a3c7a52bbe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:07.236387    4727 start.go:360] acquireMachinesLock for ha-256000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:07.236422    4727 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "ha-256000"
	I0718 20:36:07.236432    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:07.236461    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 20:36:07.243901    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:07.268930    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:07.268958    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:07.269026    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:07.269056    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269065    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269104    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:07.269127    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:07.269136    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:07.269466    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:07.395393    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:07.434010    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:07.434014    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:07.434195    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.445169    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.445186    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.445241    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2 +20000M
	I0718 20:36:07.453205    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:07.453220    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.453236    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.453239    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:07.453248    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:07.453278    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e3:ed:16:92:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/disk.qcow2
	I0718 20:36:07.491921    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:07.491947    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:07.491951    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:07.491963    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:07.492029    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:07.492048    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:07.492054    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:07.492061    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:07.492067    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:09.494175    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:09.494254    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:09.494618    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:09.494729    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:09.494764    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:09.494789    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:09.494817    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:11.496994    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:11.497242    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:11.497663    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:11.497717    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:11.497756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:11.497787    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:11.497819    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:13.500006    4727 main.go:141] libmachine: Attempt 3
	I0718 20:36:13.500080    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:13.500185    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:13.500200    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:13.500205    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:13.500210    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:13.500216    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:15.502208    4727 main.go:141] libmachine: Attempt 4
	I0718 20:36:15.502220    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:15.502255    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:15.502275    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:15.502280    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:15.502285    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:15.502290    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:17.504286    4727 main.go:141] libmachine: Attempt 5
	I0718 20:36:17.504293    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:17.504346    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:17.504356    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:17.504360    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:17.504364    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:17.504369    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:19.506369    4727 main.go:141] libmachine: Attempt 6
	I0718 20:36:19.506395    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:19.506467    4727 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0718 20:36:19.506476    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:19.506481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:19.506485    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:19.506490    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:21.508527    4727 main.go:141] libmachine: Attempt 7
	I0718 20:36:21.508554    4727 main.go:141] libmachine: Searching for 6a:e3:ed:16:92:d5 in /var/db/dhcpd_leases ...
	I0718 20:36:21.508694    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:21.508708    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:21.508719    4727 main.go:141] libmachine: Found match: 6a:e3:ed:16:92:d5
	I0718 20:36:21.508730    4727 main.go:141] libmachine: IP: 192.168.105.5
	I0718 20:36:21.508735    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0718 20:36:22.527247    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:36:22.527480    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.527975    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.527990    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:36:22.610697    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:36:22.610726    4727 buildroot.go:166] provisioning hostname "ha-256000"
	I0718 20:36:22.610824    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.611097    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.611107    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000 && echo "ha-256000" | sudo tee /etc/hostname
	I0718 20:36:22.682492    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000
	
	I0718 20:36:22.682552    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.682702    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.682713    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:36:22.742479    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:36:22.742492    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:36:22.742500    4727 buildroot.go:174] setting up certificates
	I0718 20:36:22.742504    4727 provision.go:84] configureAuth start
	I0718 20:36:22.742508    4727 provision.go:143] copyHostCerts
	I0718 20:36:22.742542    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742586    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:36:22.742593    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:36:22.742831    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:36:22.743010    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743030    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:36:22.743033    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:36:22.743097    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:36:22.743184    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743212    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:36:22.743215    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:36:22.743275    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:36:22.743373    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000 san=[127.0.0.1 192.168.105.5 ha-256000 localhost minikube]
	I0718 20:36:22.831924    4727 provision.go:177] copyRemoteCerts
	I0718 20:36:22.831953    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:36:22.831960    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:22.861471    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:36:22.861517    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:36:22.869576    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:36:22.869616    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0718 20:36:22.877642    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:36:22.877682    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 20:36:22.885597    4727 provision.go:87] duration metric: took 143.091583ms to configureAuth
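configureAuth above copies the host CA material into the profile and issues a Docker server certificate for the SANs listed in the log (127.0.0.1, 192.168.105.5, ha-256000, localhost, minikube). A minimal sketch of issuing such a certificate with crypto/x509; the CA is generated in-process here for brevity (the real flow loads ca.pem/ca-key.pem from .minikube/certs) and error handling is omitted:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-256000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-256000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.5")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}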
	I0718 20:36:22.885605    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:36:22.885700    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:22.885731    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.885814    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.885819    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:36:22.939257    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:36:22.939268    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:36:22.939327    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:36:22.939382    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.939495    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.939529    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:36:22.999120    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:36:22.999176    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:22.999299    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:22.999307    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:36:24.399001    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:36:24.399014    4727 machine.go:97] duration metric: took 1.871786709s to provisionDockerMachine
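The docker.service unit written above uses the standard systemd override pattern: an empty ExecStart= first clears the command inherited from the base unit, then the dockerd command with the TLS flags is set, and the new file is swapped in only when it differs from what is already on disk (hence the diff || mv || restart chain in the log). A rough sketch of that compare-then-replace step; paths and the unit body are placeholders, and this is not minikube's code:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// updateUnit writes the new unit only if it differs from the current one,
// then reloads systemd and enables/restarts docker. Run as root on a
// systemd host.
func updateUnit(path string, content []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged: skip daemon-reload and restart entirely
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		log.Fatal(err)
	}
}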
	I0718 20:36:24.399020    4727 client.go:171] duration metric: took 17.130530167s to LocalClient.Create
	I0718 20:36:24.399035    4727 start.go:167] duration metric: took 17.130580916s to libmachine.API.Create "ha-256000"
	I0718 20:36:24.399041    4727 start.go:293] postStartSetup for "ha-256000" (driver="qemu2")
	I0718 20:36:24.399047    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:36:24.399133    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:36:24.399144    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.429882    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:36:24.431446    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:36:24.431458    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:36:24.431559    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:36:24.431674    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:36:24.431679    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:36:24.431800    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:36:24.434949    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:24.443099    4727 start.go:296] duration metric: took 44.054208ms for postStartSetup
	I0718 20:36:24.443547    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:24.443727    4727 start.go:128] duration metric: took 17.207737166s to createHost
	I0718 20:36:24.443753    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:36:24.443841    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0718 20:36:24.443845    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:36:24.496185    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360184.183489336
	
	I0718 20:36:24.496191    4727 fix.go:216] guest clock: 1721360184.183489336
	I0718 20:36:24.496195    4727 fix.go:229] Guest: 2024-07-18 20:36:24.183489336 -0700 PDT Remote: 2024-07-18 20:36:24.44373 -0700 PDT m=+17.308254043 (delta=-260.240664ms)
	I0718 20:36:24.496206    4727 fix.go:200] guest clock delta is within tolerance: -260.240664ms
	I0718 20:36:24.496210    4727 start.go:83] releasing machines lock for "ha-256000", held for 17.260259709s
	I0718 20:36:24.496487    4727 ssh_runner.go:195] Run: cat /version.json
	I0718 20:36:24.496496    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.498161    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:36:24.498180    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:24.526501    4727 ssh_runner.go:195] Run: systemctl --version
	I0718 20:36:24.575612    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 20:36:24.577665    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:36:24.577696    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:36:24.584047    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:36:24.584056    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.584135    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.590860    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:36:24.594365    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:36:24.597804    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.597834    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:36:24.601501    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.605402    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:36:24.609279    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:36:24.613150    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:36:24.616783    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:36:24.620826    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:36:24.624868    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:36:24.628746    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:36:24.632406    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:36:24.635998    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:24.719937    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
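The sed invocations above rewrite containerd's config.toml so it uses the cgroupfs driver; the key change is forcing SystemdCgroup to false. A small stand-alone sketch of the same substitution done in-process rather than with sed:

package main

import (
	"fmt"
	"regexp"
)

// Force any "SystemdCgroup = ..." line in containerd's config.toml to false,
// matching the sed command in the log above. The config here is a trimmed
// stand-in for the real file.
func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}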
	I0718 20:36:24.727107    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:36:24.727172    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:36:24.734556    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.745145    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:36:24.752682    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:36:24.758405    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.763722    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:36:24.804424    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:36:24.810784    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:36:24.817505    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:36:24.818968    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:36:24.822004    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:36:24.827814    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:36:24.912234    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:36:24.993893    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:36:24.993951    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:36:25.000295    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:25.079893    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:27.267877    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188026583s)
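The 130-byte /etc/docker/daemon.json copied above is not printed in the log, so its exact contents are unknown here. A plausible shape (an assumption, not the literal file) that would pin Docker to the cgroupfs driver this step describes:

package main

import (
	"encoding/json"
	"fmt"
)

// Assumed daemon.json contents for illustration only -- the log does not
// show the real 130-byte file.
func main() {
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}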
	I0718 20:36:27.267954    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:36:27.273388    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:36:27.280952    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.286424    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:36:27.376871    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:36:27.462186    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.546490    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:36:27.553023    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:36:27.558470    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:27.643444    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:36:27.668876    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:36:27.669018    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:36:27.671231    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:36:27.671271    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:36:27.672746    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:36:27.689183    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:36:27.689243    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.699313    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:36:27.710299    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:36:27.710436    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:36:27.711936    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:27.716497    4727 kubeadm.go:883] updating cluster {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 20:36:27.716547    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:27.716590    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:27.721193    4727 docker.go:685] Got preloaded images: 
	I0718 20:36:27.721201    4727 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0718 20:36:27.721249    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:27.725068    4727 ssh_runner.go:195] Run: which lz4
	I0718 20:36:27.726303    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0718 20:36:27.726385    4727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0718 20:36:27.727841    4727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 20:36:27.727857    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (335411903 bytes)
	I0718 20:36:29.032881    4727 docker.go:649] duration metric: took 1.306555792s to copy over tarball
	I0718 20:36:29.032945    4727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 20:36:30.077797    4727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.044866416s)
	I0718 20:36:30.077812    4727 ssh_runner.go:146] rm: /preloaded.tar.lz4
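For scale: the preload transfer above moved 335411903 bytes in roughly 1.31 s before extraction. A quick back-of-the-envelope throughput check using the two figures from the log lines above:

package main

import "fmt"

// Implied throughput of the preload copy; size and duration are taken
// directly from the log.
func main() {
	const size = 335411903.0 // bytes copied to /preloaded.tar.lz4
	const secs = 1.306555792 // "copy over tarball" duration in seconds
	fmt.Printf("%.1f MiB/s\n", size/secs/(1<<20)) // ~244.8 MiB/s
}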
	I0718 20:36:30.092929    4727 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 20:36:30.096929    4727 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0718 20:36:30.102897    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:30.190133    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:36:32.408215    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218126791s)
	I0718 20:36:32.408325    4727 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 20:36:32.414564    4727 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 20:36:32.414576    4727 cache_images.go:84] Images are preloaded, skipping loading
	I0718 20:36:32.414588    4727 kubeadm.go:934] updating node { 192.168.105.5 8443 v1.30.3 docker true true} ...
	I0718 20:36:32.414662    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:36:32.414717    4727 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 20:36:32.422967    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:32.422975    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:32.422989    4727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 20:36:32.423001    4727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256000 NodeName:ha-256000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 20:36:32.423064    4727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-256000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
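The kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to the node as kubeadm.yaml.new. A stdlib-only Go sketch that enumerates the document kinds, using a trimmed stand-in for the full config:

package main

import (
	"fmt"
	"strings"
)

// List the kind of each document in a multi-document YAML stream. The config
// here is a shortened stand-in for the kubeadm.yaml shown in the log.
func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for _, doc := range strings.Split(cfg, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
}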
	
	I0718 20:36:32.423074    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:36:32.423127    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:36:32.430238    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:36:32.430293    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
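The static pod manifest above pins the control-plane VIP 192.168.105.254 on port 8443, with leader election (plndr-cp-lock lease) and load-balancing enabled. Once a leader holds the lease, the VIP should accept TCP connections; a tiny reachability probe one could run from the host (a diagnostic sketch, not part of the test run):

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the control-plane VIP from the kube-vip manifest above. Run from a
// host that can reach the 192.168.105.0/24 network.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.105.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}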
	I0718 20:36:32.430329    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:36:32.433734    4727 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 20:36:32.433764    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0718 20:36:32.437628    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0718 20:36:32.443760    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:36:32.449483    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0718 20:36:32.455815    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1448 bytes)
	I0718 20:36:32.461759    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:36:32.463168    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:36:32.467182    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:36:32.556522    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:36:32.567007    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.5
	I0718 20:36:32.567019    4727 certs.go:194] generating shared ca certs ...
	I0718 20:36:32.567029    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.567195    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:36:32.567242    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:36:32.567249    4727 certs.go:256] generating profile certs ...
	I0718 20:36:32.567287    4727 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:36:32.567299    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt with IP's: []
	I0718 20:36:32.629331    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt ...
	I0718 20:36:32.629341    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt: {Name:mkc9c3e562115edef8b85e012e81a3eb4a2cf75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key ...
	I0718 20:36:32.629649    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key: {Name:mkb41caa35d055a2dcb04d364862addacfff33bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.629781    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4
	I0718 20:36:32.629789    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.254]
	I0718 20:36:32.695617    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 ...
	I0718 20:36:32.695626    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4: {Name:mkee89910ca1db08ac083863b0e4a027ae270203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696056    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 ...
	I0718 20:36:32.696061    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4: {Name:mk8365902b4e9f071c9404629a4b35cc6ca6ebbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.696198    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:36:32.696306    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.084e6dd4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:36:32.696557    4727 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
	I0718 20:36:32.696565    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt with IP's: []
	I0718 20:36:32.762976    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt ...
	I0718 20:36:32.762980    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt: {Name:mkb3e0281e7ef362624ad24bb17cfb244b9bc171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763112    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key ...
	I0718 20:36:32.763115    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key: {Name:mkc06a04ddb3616913d2c6f5647bad25fef6f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:32.763224    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:36:32.763237    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:36:32.763247    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:36:32.763257    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:36:32.763268    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:36:32.763279    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:36:32.763290    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:36:32.763301    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:36:32.763382    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:36:32.763410    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:36:32.763415    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:36:32.763434    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:36:32.763451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:36:32.763468    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:36:32.763505    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:36:32.763524    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.763535    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.763546    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.763807    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:36:32.773281    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:36:32.781447    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:36:32.789770    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:36:32.798040    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 20:36:32.806232    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:36:32.814458    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:36:32.822522    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:36:32.830515    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:36:32.838566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:36:32.846581    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:36:32.854568    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 20:36:32.860769    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:36:32.863035    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:36:32.867352    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868859    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.868879    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:36:32.870984    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:36:32.874504    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:36:32.878096    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879659    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.879678    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:36:32.881640    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 20:36:32.885559    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:36:32.889461    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891114    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.891133    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:36:32.893171    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:36:32.897112    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:36:32.898621    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:36:32.898660    4727 kubeadm.go:392] StartCluster: {Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:32.898726    4727 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 20:36:32.903849    4727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 20:36:32.907545    4727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 20:36:32.910740    4727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 20:36:32.914021    4727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 20:36:32.914030    4727 kubeadm.go:157] found existing configuration files:
	
	I0718 20:36:32.914050    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 20:36:32.917254    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 20:36:32.917277    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 20:36:32.920874    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 20:36:32.924549    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 20:36:32.924574    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 20:36:32.928189    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.931542    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 20:36:32.931572    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 20:36:32.934804    4727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 20:36:32.937825    4727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 20:36:32.937847    4727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 20:36:32.941208    4727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 20:36:32.964473    4727 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0718 20:36:32.964502    4727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 20:36:33.010272    4727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 20:36:33.010346    4727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 20:36:33.010394    4727 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 20:36:33.080896    4727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 20:36:33.088116    4727 out.go:204]   - Generating certificates and keys ...
	I0718 20:36:33.088149    4727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 20:36:33.088180    4727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 20:36:33.187618    4727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0718 20:36:33.225765    4727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0718 20:36:33.439485    4727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0718 20:36:33.599214    4727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0718 20:36:33.681357    4727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0718 20:36:33.681418    4727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.726840    4727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0718 20:36:33.726901    4727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-256000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0718 20:36:33.875169    4727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0718 20:36:34.071575    4727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0718 20:36:34.163748    4727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0718 20:36:34.163778    4727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 20:36:34.260583    4727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 20:36:34.352375    4727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 20:36:34.395125    4727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 20:36:34.512349    4727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 20:36:34.655223    4727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 20:36:34.655381    4727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 20:36:34.656483    4727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 20:36:34.666848    4727 out.go:204]   - Booting up control plane ...
	I0718 20:36:34.666901    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 20:36:34.666950    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 20:36:34.666982    4727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 20:36:34.667031    4727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 20:36:34.667081    4727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 20:36:34.667103    4727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 20:36:34.759306    4727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0718 20:36:34.759350    4727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0718 20:36:35.263383    4727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.7975ms
	I0718 20:36:35.263624    4727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0718 20:36:38.766721    4727 kubeadm.go:310] [api-check] The API server is healthy after 3.504642043s
	I0718 20:36:38.772139    4727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 20:36:38.775784    4727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 20:36:38.782114    4727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 20:36:38.782191    4727 kubeadm.go:310] [mark-control-plane] Marking the node ha-256000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 20:36:38.784595    4727 kubeadm.go:310] [bootstrap-token] Using token: yv8fsh.sh51yi31jewcw15j
	I0718 20:36:38.788784    4727 out.go:204]   - Configuring RBAC rules ...
	I0718 20:36:38.788835    4727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 20:36:38.790051    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 20:36:38.796261    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 20:36:38.797188    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 20:36:38.797986    4727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 20:36:38.798957    4727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 20:36:39.169725    4727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 20:36:39.576005    4727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 20:36:40.169284    4727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 20:36:40.169608    4727 kubeadm.go:310] 
	I0718 20:36:40.169641    4727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 20:36:40.169646    4727 kubeadm.go:310] 
	I0718 20:36:40.169692    4727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 20:36:40.169695    4727 kubeadm.go:310] 
	I0718 20:36:40.169709    4727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 20:36:40.169760    4727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 20:36:40.169794    4727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 20:36:40.169797    4727 kubeadm.go:310] 
	I0718 20:36:40.169826    4727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 20:36:40.169830    4727 kubeadm.go:310] 
	I0718 20:36:40.169856    4727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 20:36:40.169858    4727 kubeadm.go:310] 
	I0718 20:36:40.169883    4727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 20:36:40.169938    4727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 20:36:40.169984    4727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 20:36:40.169987    4727 kubeadm.go:310] 
	I0718 20:36:40.170044    4727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 20:36:40.170090    4727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 20:36:40.170093    4727 kubeadm.go:310] 
	I0718 20:36:40.170134    4727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170222    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 20:36:40.170234    4727 kubeadm.go:310] 	--control-plane 
	I0718 20:36:40.170242    4727 kubeadm.go:310] 
	I0718 20:36:40.170285    4727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 20:36:40.170299    4727 kubeadm.go:310] 
	I0718 20:36:40.170351    4727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yv8fsh.sh51yi31jewcw15j \
	I0718 20:36:40.170426    4727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 20:36:40.170492    4727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
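The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch of recomputing that hash, assuming the CA is readable at /etc/kubernetes/pki/ca.crt on the control-plane node (the path is an assumption, not taken from this log):

// cahash.go - recompute kubeadm's --discovery-token-ca-cert-hash from a CA cert.
// Illustrative sketch only; assumes the cluster CA lives at /etc/kubernetes/pki/ca.crt.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}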
	I0718 20:36:40.170502    4727 cni.go:84] Creating CNI manager for ""
	I0718 20:36:40.170507    4727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 20:36:40.176555    4727 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 20:36:40.183616    4727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 20:36:40.185686    4727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 20:36:40.185696    4727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 20:36:40.191764    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 20:36:40.332259    4727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 20:36:40.332307    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.332337    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000 minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=true
	I0718 20:36:40.385331    4727 ops.go:34] apiserver oom_adj: -16
	I0718 20:36:40.385383    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:40.887435    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.387480    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:41.887395    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.387370    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:42.885756    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.387374    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:43.886101    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.386656    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:44.887355    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.387330    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:45.887331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.386668    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:46.886398    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.385335    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:47.887237    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.387224    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:48.887271    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.387175    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:49.885647    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.387168    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:50.887214    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.387158    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:51.887129    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.387127    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:52.887088    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.387119    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:53.885301    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.387061    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 20:36:54.453749    4727 kubeadm.go:1113] duration metric: took 14.12187225s to wait for elevateKubeSystemPrivileges
	I0718 20:36:54.453766    4727 kubeadm.go:394] duration metric: took 21.55570275s to StartCluster
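The burst of "kubectl get sa default" calls above is a readiness poll: the command is retried roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges duration line summarizes. A stdlib-only sketch of the same wait loop, assuming kubectl is on PATH and a 5-minute timeout (both assumptions, not values from this run):

// waitsa.go - poll until the "default" ServiceAccount exists, mirroring the
// repeated `kubectl get sa default` calls above. Sketch only; the kubeconfig
// path and timeout are assumptions, not values from this run.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		cmd := exec.CommandContext(ctx, "kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			log.Println("default service account is present")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for the default service account")
		case <-ticker.C:
			// retry on the next tick
		}
	}
}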
	I0718 20:36:54.453776    4727 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.453868    4727 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.454239    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:36:54.454483    4727 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.454492    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:36:54.454494    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0718 20:36:54.454496    4727 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 20:36:54.454530    4727 addons.go:69] Setting storage-provisioner=true in profile "ha-256000"
	I0718 20:36:54.454533    4727 addons.go:69] Setting default-storageclass=true in profile "ha-256000"
	I0718 20:36:54.454543    4727 addons.go:234] Setting addon storage-provisioner=true in "ha-256000"
	I0718 20:36:54.454546    4727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-256000"
	I0718 20:36:54.454554    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.454722    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.455342    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:36:54.455486    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 20:36:54.455762    4727 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 20:36:54.455811    4727 addons.go:234] Setting addon default-storageclass=true in "ha-256000"
	I0718 20:36:54.455823    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:36:54.460675    4727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 20:36:54.464747    4727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.464758    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 20:36:54.464769    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.465436    4727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.465440    4727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 20:36:54.465444    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:36:54.511774    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 20:36:54.519079    4727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 20:36:54.706626    4727 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0718 20:36:54.777305    4727 round_trippers.go:463] GET https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0718 20:36:54.777314    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.777318    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.777321    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.782732    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:36:54.783013    4727 round_trippers.go:463] PUT https://192.168.105.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0718 20:36:54.783019    4727 round_trippers.go:469] Request Headers:
	I0718 20:36:54.783023    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:36:54.783026    4727 round_trippers.go:473]     Content-Type: application/json
	I0718 20:36:54.783028    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:36:54.784014    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:36:54.792272    4727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0718 20:36:54.793579    4727 addons.go:510] duration metric: took 339.092083ms for enable addons: enabled=[storage-provisioner default-storageclass]
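The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above appears to be the default-storageclass addon updating the "standard" StorageClass. A hedged client-go sketch of the same read-modify-write, assuming k8s.io/client-go is available and using the kubeconfig path from this run; this is not minikube's own addon code:

// defaultsc.go - mark the "standard" StorageClass as default, mirroring the
// GET + PUT requests logged above. Sketch only; kubeconfig path is taken from
// this log, and client-go is an assumed dependency.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19302-1213/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	sc, err := clientset.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// The conventional annotation that marks a StorageClass as the cluster default.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

	if _, err := clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("standard StorageClass marked as default")
}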
	I0718 20:36:54.793593    4727 start.go:246] waiting for cluster config update ...
	I0718 20:36:54.793600    4727 start.go:255] writing updated cluster config ...
	I0718 20:36:54.798143    4727 out.go:177] 
	I0718 20:36:54.802340    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:54.802369    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.805206    4727 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:36:54.813295    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:36:54.813304    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:36:54.813383    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:36:54.813389    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:36:54.813425    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:36:54.813828    4727 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:36:54.813863    4727 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:36:54.813872    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:36:54.813899    4727 start.go:125] createHost starting for "m02" (driver="qemu2")
	I0718 20:36:54.818236    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:36:54.833731    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:36:54.833754    4727 client.go:168] LocalClient.Create starting
	I0718 20:36:54.833854    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:36:54.833891    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833898    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.833936    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:36:54.833959    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:36:54.833965    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:36:54.834273    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:36:54.991167    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:36:55.074302    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:36:55.074313    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:36:55.074505    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.084177    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.084198    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.084247    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2 +20000M
	I0718 20:36:55.092640    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:36:55.092655    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.092668    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.092672    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:36:55.092685    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:36:55.092723    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:36:55.131373    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:36:55.131397    4727 main.go:141] libmachine: STDERR: 
	I0718 20:36:55.131401    4727 main.go:141] libmachine: Attempt 0
	I0718 20:36:55.131414    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:55.131476    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:55.131491    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:55.131496    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:55.131509    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:55.131515    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:55.131521    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:57.132241    4727 main.go:141] libmachine: Attempt 1
	I0718 20:36:57.132260    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:57.132370    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:57.132380    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:57.132387    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:57.132391    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:57.132399    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:57.132403    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:36:59.134429    4727 main.go:141] libmachine: Attempt 2
	I0718 20:36:59.134514    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:36:59.134610    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:36:59.134633    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:36:59.134640    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:36:59.134645    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:36:59.134650    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:36:59.134655    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:01.136704    4727 main.go:141] libmachine: Attempt 3
	I0718 20:37:01.136730    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:01.136864    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:01.136874    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:01.136879    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:01.136892    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:01.136897    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:01.136902    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:03.139087    4727 main.go:141] libmachine: Attempt 4
	I0718 20:37:03.139131    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:03.139262    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:03.139278    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:03.139286    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:03.139290    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:03.139295    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:03.139305    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:05.141342    4727 main.go:141] libmachine: Attempt 5
	I0718 20:37:05.141371    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:05.141487    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:05.141499    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:05.141504    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:05.141508    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:05.141513    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:05.141518    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:07.141729    4727 main.go:141] libmachine: Attempt 6
	I0718 20:37:07.141760    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:07.141844    4727 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0718 20:37:07.141853    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:37:07.141858    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:37:07.141862    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:37:07.141866    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:37:07.141871    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:37:09.143893    4727 main.go:141] libmachine: Attempt 7
	I0718 20:37:09.143910    4727 main.go:141] libmachine: Searching for 5a:e8:7:38:73:30 in /var/db/dhcpd_leases ...
	I0718 20:37:09.143997    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:37:09.144009    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:37:09.144011    4727 main.go:141] libmachine: Found match: 5a:e8:7:38:73:30
	I0718 20:37:09.144020    4727 main.go:141] libmachine: IP: 192.168.105.6
	I0718 20:37:09.144023    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
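The Attempt 0..7 loop above discovers the new VM's address by scanning /var/db/dhcpd_leases for the MAC handed to qemu; note that the search string drops leading zeros in octets (5a:e8:7:38:73:30 vs. the 5a:e8:07:38:73:30 passed on the qemu command line). A stdlib-only sketch of that lookup, assuming the bootpd lease-file layout of key=value pairs between braces:

// leaselookup.go - find the IP handed out to a given MAC by scanning
// /var/db/dhcpd_leases, as the Attempt 0..7 loop above does. Sketch only;
// the lease-file layout ({ ... key=value ... }) is an assumption about bootpd.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	target := "5a:e8:7:38:73:30" // MAC as it appears in the leases file (no leading zeros)

	data, err := os.ReadFile("/var/db/dhcpd_leases")
	if err != nil {
		log.Fatal(err)
	}

	var ip, hw string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "{":
			ip, hw = "", "" // start of a new lease entry
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,5a:e8:7:38:73:30"; keep the part after the comma
			if i := strings.Index(line, ","); i >= 0 {
				hw = line[i+1:]
			}
		case line == "}":
			if hw == target && ip != "" {
				fmt.Printf("found %s at %s\n", target, ip)
				return
			}
		}
	}
	log.Fatalf("no lease found for %s", target)
}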
	I0718 20:37:22.173394    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:37:22.173460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.173824    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.173832    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:37:22.224366    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:37:22.224379    4727 buildroot.go:166] provisioning hostname "ha-256000-m02"
	I0718 20:37:22.224437    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.224569    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.224574    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m02 && echo "ha-256000-m02" | sudo tee /etc/hostname
	I0718 20:37:22.281136    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m02
	
	I0718 20:37:22.281193    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.281326    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.281333    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:37:22.335405    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
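The hostname and /etc/hosts steps above are run over plain SSH as the docker user with the machine's generated key. A minimal sketch of issuing one such command with golang.org/x/crypto/ssh; the address and key path are taken from this log, while x/crypto/ssh is an assumed dependency rather than minikube's own SSH client:

// sshrun.go - run a command on the new node over SSH, as the provisioner does
// above. Sketch only; host-key checking is skipped because the target is a
// throwaway test VM.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.105.6:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}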
	I0718 20:37:22.335420    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:37:22.335427    4727 buildroot.go:174] setting up certificates
	I0718 20:37:22.335432    4727 provision.go:84] configureAuth start
	I0718 20:37:22.335436    4727 provision.go:143] copyHostCerts
	I0718 20:37:22.335460    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335499    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:37:22.335504    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:37:22.335625    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:37:22.335755    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335793    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:37:22.335798    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:37:22.335849    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:37:22.335937    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.335958    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:37:22.335961    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:37:22.336009    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:37:22.336098    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m02 san=[127.0.0.1 192.168.105.6 ha-256000-m02 localhost minikube]
	I0718 20:37:22.416839    4727 provision.go:177] copyRemoteCerts
	I0718 20:37:22.417292    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:37:22.417307    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:22.446250    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:37:22.446323    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:37:22.455193    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:37:22.455243    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:37:22.463182    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:37:22.463217    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:37:22.471841    4727 provision.go:87] duration metric: took 136.406375ms to configureAuth
	I0718 20:37:22.471860    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:37:22.472154    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:22.472192    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.472306    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.472312    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:37:22.520570    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:37:22.520580    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:37:22.520661    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:37:22.520720    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.520835    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.520884    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:37:22.573905    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:37:22.573954    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:22.574074    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:22.574082    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:37:23.946918    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:37:23.946932    4727 machine.go:97] duration metric: took 1.773574458s to provisionDockerMachine
	I0718 20:37:23.946948    4727 client.go:171] duration metric: took 29.113993584s to LocalClient.Create
	I0718 20:37:23.946964    4727 start.go:167] duration metric: took 29.114041166s to libmachine.API.Create "ha-256000"
	I0718 20:37:23.946968    4727 start.go:293] postStartSetup for "ha-256000-m02" (driver="qemu2")
	I0718 20:37:23.946975    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:37:23.947049    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:37:23.947059    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:23.975789    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:37:23.977316    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:37:23.977325    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:37:23.977414    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:37:23.977531    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:37:23.977538    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:37:23.977667    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:37:23.981129    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:23.989836    4727 start.go:296] duration metric: took 42.86225ms for postStartSetup
	I0718 20:37:23.990279    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:37:23.990466    4727 start.go:128] duration metric: took 29.177367125s to createHost
	I0718 20:37:23.990492    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:37:23.990582    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0718 20:37:23.990587    4727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 20:37:24.039991    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360244.056265969
	
	I0718 20:37:24.040003    4727 fix.go:216] guest clock: 1721360244.056265969
	I0718 20:37:24.040011    4727 fix.go:229] Guest: 2024-07-18 20:37:24.056265969 -0700 PDT Remote: 2024-07-18 20:37:23.990469 -0700 PDT m=+76.856635126 (delta=65.796969ms)
	I0718 20:37:24.040021    4727 fix.go:200] guest clock delta is within tolerance: 65.796969ms
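The guest-clock check above compares the output of `date +%s.%N` on the VM with the host clock and accepts the ~66ms skew. A small sketch of the same comparison, using the sample value captured above; the one-second tolerance is an assumption, not the value minikube uses:

// clockdelta.go - turn the guest's `date +%s.%N` output into a time.Time and
// compare it with the local clock, as fix.go does above.
package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1721360244.056265969" // sample output captured above

	parts := strings.SplitN(guestOut, ".", 2)
	if len(parts) != 2 {
		log.Fatalf("unexpected date output: %q", guestOut)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
	nsec, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		log.Fatal(err)
	}

	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta > time.Second { // assumed tolerance
		fmt.Println("delta exceeds tolerance; the guest clock may need syncing")
	}
}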
	I0718 20:37:24.040027    4727 start.go:83] releasing machines lock for "ha-256000-m02", held for 29.226966s
	I0718 20:37:24.045188    4727 out.go:177] * Found network options:
	I0718 20:37:24.048256    4727 out.go:177]   - NO_PROXY=192.168.105.5
	W0718 20:37:24.052331    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:37:24.052639    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:37:24.052695    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:37:24.052702    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	I0718 20:37:24.052696    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:37:24.052803    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}
	W0718 20:37:24.080701    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:37:24.080760    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:37:24.120864    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:37:24.120877    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.120944    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.128913    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:37:24.133095    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:37:24.137320    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.137368    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:37:24.141513    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.145685    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:37:24.149674    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:37:24.153524    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:37:24.157504    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:37:24.161442    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:37:24.165217    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:37:24.169715    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:37:24.173504    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:37:24.177428    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.249585    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 20:37:24.258814    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:37:24.258889    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:37:24.266134    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.272789    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:37:24.282701    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:37:24.287831    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.293394    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:37:24.332150    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:37:24.338444    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:37:24.344970    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:37:24.346508    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:37:24.349662    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:37:24.355683    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:37:24.439008    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:37:24.522884    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:37:24.522913    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:37:24.529269    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:24.614408    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:37:26.705797    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.091426708s)
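The block above forces a single cgroup driver, "cgroupfs", for both runtimes: containerd gets `SystemdCgroup = false` patched into /etc/containerd/config.toml via sed, and Docker gets a small /etc/docker/daemon.json pushed over SSH before the restart. The report does not print the 130-byte daemon.json, so the sketch below only illustrates the kind of payload involved; the exact keys are an assumption, not the file minikube actually wrote.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical reconstruction of a daemon.json that selects the
	// "cgroupfs" cgroup driver for Docker; the real 130-byte file written
	// by minikube is not shown in this log.
	daemonJSON := map[string]interface{}{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	out, err := json.MarshalIndent(daemonJSON, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // content of this shape would land in /etc/docker/daemon.json
}
```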
	I0718 20:37:26.705868    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 20:37:26.711797    4727 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 20:37:26.719055    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.724747    4727 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 20:37:26.813533    4727 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 20:37:26.893596    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:26.965581    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 20:37:26.972962    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 20:37:26.978785    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:27.061213    4727 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 20:37:27.087585    4727 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 20:37:27.087659    4727 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 20:37:27.091046    4727 start.go:563] Will wait 60s for crictl version
	I0718 20:37:27.091097    4727 ssh_runner.go:195] Run: which crictl
	I0718 20:37:27.092542    4727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 20:37:27.112215    4727 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 20:37:27.112278    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.124950    4727 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 20:37:27.136592    4727 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 20:37:27.145555    4727 out.go:177]   - env NO_PROXY=192.168.105.5
	I0718 20:37:27.149713    4727 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0718 20:37:27.151201    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:27.155414    4727 mustload.go:65] Loading cluster: ha-256000
	I0718 20:37:27.155551    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:37:27.156066    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:27.156157    4727 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000 for IP: 192.168.105.6
	I0718 20:37:27.156161    4727 certs.go:194] generating shared ca certs ...
	I0718 20:37:27.156167    4727 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.156269    4727 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 20:37:27.156316    4727 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 20:37:27.156321    4727 certs.go:256] generating profile certs ...
	I0718 20:37:27.156387    4727 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key
	I0718 20:37:27.156400    4727 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9
	I0718 20:37:27.156410    4727 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.5 192.168.105.6 192.168.105.254]
	I0718 20:37:27.328161    4727 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 ...
	I0718 20:37:27.328188    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9: {Name:mkff536dfdabd0cc9a693525dd142a97006d4485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328645    4727 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 ...
	I0718 20:37:27.328655    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9: {Name:mkb963d77aed955311589ae3cd9371dca3b50bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:37:27.328816    4727 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt
	I0718 20:37:27.328945    4727 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key.dd3fbca9 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key
	I0718 20:37:27.329100    4727 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key
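The apiserver certificate generated above lists every address a client might use to reach the control plane: the in-cluster service IP (10.96.0.1), localhost, both control-plane node IPs, and the kube-vip VIP (192.168.105.254). A minimal, self-contained Go sketch of issuing a certificate with those IP SANs follows; it is self-signed for brevity, whereas minikube signs with its minikubeCA key.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs as listed in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.105.5"), net.ParseIP("192.168.105.6"), net.ParseIP("192.168.105.254"),
		},
	}
	// Self-signed here; minikube would sign with the minikubeCA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```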
	I0718 20:37:27.329110    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 20:37:27.329125    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 20:37:27.329137    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 20:37:27.329150    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 20:37:27.329162    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 20:37:27.329176    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 20:37:27.329186    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 20:37:27.329197    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 20:37:27.329271    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 20:37:27.329299    4727 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 20:37:27.329305    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 20:37:27.329347    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 20:37:27.329372    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 20:37:27.329396    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 20:37:27.329451    4727 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:37:27.329478    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.329491    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.329501    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem -> /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.329519    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:27.355925    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0718 20:37:27.357647    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0718 20:37:27.362088    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0718 20:37:27.363733    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0718 20:37:27.367759    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0718 20:37:27.369261    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0718 20:37:27.373839    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0718 20:37:27.375475    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0718 20:37:27.379174    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0718 20:37:27.380628    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0718 20:37:27.384809    4727 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0718 20:37:27.386562    4727 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0718 20:37:27.390606    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 20:37:27.399865    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 20:37:27.408308    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 20:37:27.416747    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 20:37:27.425050    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0718 20:37:27.433244    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 20:37:27.441306    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 20:37:27.449446    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0718 20:37:27.457566    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 20:37:27.465676    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 20:37:27.473743    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 20:37:27.482174    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0718 20:37:27.487947    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0718 20:37:27.493902    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0718 20:37:27.499712    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0718 20:37:27.505265    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0718 20:37:27.511047    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0718 20:37:27.517340    4727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0718 20:37:27.523229    4727 ssh_runner.go:195] Run: openssl version
	I0718 20:37:27.525438    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 20:37:27.529080    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530597    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.530617    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 20:37:27.532775    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 20:37:27.536483    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 20:37:27.540031    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541631    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.541649    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 20:37:27.543631    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 20:37:27.547571    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 20:37:27.551419    4727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553057    4727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.553079    4727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 20:37:27.555162    4727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
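The openssl/ln sequence above is the standard way to make extra CAs visible to OpenSSL-based clients on the guest: each PEM is placed under /usr/share/ca-certificates and a `<subject-hash>.0` symlink is created in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A rough sketch of the same step, shelling out to openssl as the log does; it assumes openssl and ln are on PATH and that the process is allowed to write to /etc/ssl/certs.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink computes the OpenSSL subject hash of a PEM certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses for CA lookup.
func hashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Equivalent of: ln -fs <pemPath> <link> (run with sudo on the guest).
	return link, exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/1712.pem",
		"/usr/share/ca-certificates/17122.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		link, err := hashLink(p)
		fmt.Println(p, "->", link, err)
	}
}
```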
	I0718 20:37:27.559227    4727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 20:37:27.560725    4727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0718 20:37:27.560754    4727 kubeadm.go:934] updating node {m02 192.168.105.6 8443 v1.30.3 docker true true} ...
	I0718 20:37:27.560799    4727 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-256000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 20:37:27.560814    4727 kube-vip.go:115] generating kube-vip config ...
	I0718 20:37:27.560837    4727 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0718 20:37:27.572539    4727 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0718 20:37:27.572577    4727 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.105.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
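The generated manifest runs kube-vip as a static pod on each control-plane node; with `cp_enable` and `vip_leaderelection` set, the current leader answers ARP for 192.168.105.254 and load-balances port 8443, so clients keep one stable endpoint across control-plane restarts. As a sketch, a quick probe of that VIP could look like the following; TLS verification is skipped because only the unauthenticated /healthz endpoint is hit.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the kube-vip address from the manifest above. Once a leader
	// holds the VIP, the apiserver behind it should answer /healthz.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.105.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP healthz status:", resp.Status)
}
```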
	I0718 20:37:27.572623    4727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.576082    4727 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0718 20:37:27.576121    4727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm
	I0718 20:37:27.579785    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl
	I0718 20:37:27.579780    4727 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet
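kubeadm, kubectl and kubelet are fetched with a `checksum=file:` URL, i.e. each binary is verified against its published .sha256 before being cached and copied into the guest. A minimal sketch of that pattern for the kubeadm URL from the log:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Binary and checksum URLs as printed in the log above.
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubeadm"

	sumResp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want, err := io.ReadAll(sumResp.Body)
	sumResp.Body.Close()
	if err != nil {
		panic(err)
	}

	binResp, err := http.Get(base)
	if err != nil {
		panic(err)
	}
	defer binResp.Body.Close()

	f, err := os.Create("kubeadm")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Hash while writing so the file is only trusted if the digest matches.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), binResp.Body); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum ok:", got == strings.TrimSpace(string(want)))
}
```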
	I0718 20:37:34.561853    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.561928    4727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0718 20:37:34.564073    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0718 20:37:34.564095    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (49938584 bytes)
	I0718 20:37:35.510887    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.510952    4727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0718 20:37:35.512864    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0718 20:37:35.512884    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (48955544 bytes)
	I0718 20:37:42.606961    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:37:42.613080    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.613168    4727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0718 20:37:42.614817    4727 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0718 20:37:42.614833    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/linux/arm64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (96467384 bytes)
	I0718 20:37:43.119287    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0718 20:37:43.122637    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0718 20:37:43.128732    4727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 20:37:43.134516    4727 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1442 bytes)
	I0718 20:37:43.141275    4727 ssh_runner.go:195] Run: grep 192.168.105.254	control-plane.minikube.internal$ /etc/hosts
	I0718 20:37:43.142606    4727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 20:37:43.146857    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:37:43.230113    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:37:43.243145    4727 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:37:43.243333    4727 start.go:317] joinCluster: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluste
rName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:37:43.243382    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0718 20:37:43.243391    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	I0718 20:37:43.371073    4727 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:37:43.371092    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443"
	I0718 20:38:03.232381    4727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8ur534.0hjhqar78ehuh131 --discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-256000-m02 --control-plane --apiserver-advertise-address=192.168.105.6 --apiserver-bind-port=8443": (19.861822375s)
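The ~20 s step above is the actual control-plane join: the token and CA hash come from `kubeadm token create --print-join-command` on the first node, and the `--control-plane` plus `--apiserver-advertise-address` flags are what make m02 a second API-server/etcd member behind the VIP rather than a plain worker. As a standalone sketch, the same invocation run directly on the new node would look like this (values copied verbatim from the log):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Replays the control-plane join command from the log as a direct
	// kubeadm invocation on the joining node (requires root and kubeadm
	// on PATH); not how minikube drives it, which goes over SSH.
	cmd := exec.Command("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "8ur534.0hjhqar78ehuh131",
		"--discovery-token-ca-cert-hash", "sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name=ha-256000-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.105.6",
		"--apiserver-bind-port=8443",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```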
	I0718 20:38:03.232396    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0718 20:38:03.485331    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-256000-m02 minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-256000 minikube.k8s.io/primary=false
	I0718 20:38:03.530961    4727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-256000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0718 20:38:03.578648    4727 start.go:319] duration metric: took 20.3358655s to joinCluster
	I0718 20:38:03.578688    4727 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:03.578898    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:03.583884    4727 out.go:177] * Verifying Kubernetes components...
	I0718 20:38:03.590972    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:03.702999    4727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 20:38:03.709797    4727 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:38:03.709929    4727 kapi.go:59] client config for ha-256000: &rest.Config{Host:"https://192.168.105.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1023b3790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0718 20:38:03.709957    4727 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.105.254:8443 with https://192.168.105.5:8443
	I0718 20:38:03.710058    4727 node_ready.go:35] waiting up to 6m0s for node "ha-256000-m02" to be "Ready" ...
	I0718 20:38:03.710093    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:03.710097    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:03.710101    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:03.710109    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:03.716299    4727 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 20:38:04.212157    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.212175    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.212180    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.212182    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.217870    4727 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 20:38:04.711681    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:04.711692    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:04.711696    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:04.711698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:04.713463    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.212138    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.212149    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.212153    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.212156    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.214175    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:05.711331    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:05.711345    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:05.711360    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:05.711363    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:05.712682    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:05.713155    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:06.210250    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.210264    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.210268    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.210271    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.212254    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:06.711235    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:06.711255    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:06.711260    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:06.711262    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:06.712940    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.212089    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.212100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.212104    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.212106    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.214317    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:07.712070    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:07.712079    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:07.712083    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:07.712086    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:07.713825    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:07.714102    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:08.211862    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.211878    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.211883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.211885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.213993    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:08.712062    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:08.712075    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:08.712079    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:08.712081    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:08.713753    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.212027    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.212036    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.212052    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.212055    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.213833    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:09.712020    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:09.712029    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:09.712033    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:09.712035    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:09.713439    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.212016    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.212025    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.212029    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.212031    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.213662    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:10.213924    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:10.711085    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:10.711100    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:10.711114    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:10.711117    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:10.712848    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.211980    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.211995    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.211999    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.212002    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.213760    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:11.711981    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:11.711994    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:11.712005    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:11.712008    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:11.713435    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.211955    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.211969    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.211974    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.211976    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.213759    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:12.214202    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:12.711912    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:12.711929    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:12.711933    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:12.711935    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:12.713382    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.211920    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.211932    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.211941    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.211943    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.213828    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:13.711194    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:13.711206    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:13.711209    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:13.711211    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:13.712757    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:14.211901    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.211919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.211924    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.211932    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.213956    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:14.214285    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:14.711860    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:14.711876    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:14.711883    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:14.711885    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:14.713170    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.211895    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.211907    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.211911    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.211913    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.213693    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:15.711835    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:15.711849    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:15.711863    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:15.711865    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:15.713487    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.211839    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.211844    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.211846    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.213365    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.711659    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:16.711669    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:16.711673    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:16.711675    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:16.713252    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:16.713433    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:17.211818    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.211830    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.211834    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.211836    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.213413    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:17.711756    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:17.711781    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:17.711785    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:17.711788    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:17.713341    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.211779    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.211794    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.211798    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.211800    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.213551    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.711749    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:18.711759    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:18.711764    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:18.711766    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:18.713325    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:18.713645    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:19.211738    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.211750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.211754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.211756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.213507    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:19.711717    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:19.711731    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:19.711734    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:19.711736    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:19.713476    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.211230    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.211271    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.211314    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.211318    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.212922    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:20.710773    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:20.710783    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:20.710787    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:20.710790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:20.712163    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.211705    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.211717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.211738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.211742    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.213362    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:21.213898    4727 node_ready.go:53] node "ha-256000-m02" has status "Ready":"False"
	I0718 20:38:21.711683    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:21.711698    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:21.711702    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:21.711704    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:21.713411    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.211928    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.211938    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.211942    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.211944    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.214292    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.214473    4727 node_ready.go:49] node "ha-256000-m02" has status "Ready":"True"
	I0718 20:38:22.214479    4727 node_ready.go:38] duration metric: took 18.50492425s for node "ha-256000-m02" to be "Ready" ...
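The preceding polling loop is simply a GET of the node object every ~500 ms until its Ready condition reports "True" (about 18.5 s here, roughly the time the kubelet needs to register and the network plugin to come up). A minimal stand-in for that wait is sketched below; a real caller, like minikube, authenticates with the admin client certificate rather than sending an anonymous request.

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node holds just the fields needed to read the Ready condition.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Same endpoint the log polls; a production client would present the
	// admin client cert instead of skipping verification and auth.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02"
	for {
		resp, err := client.Get(url)
		if err == nil {
			var n node
			_ = json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```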
	I0718 20:38:22.214483    4727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:22.214513    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:22.214523    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.214528    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.214533    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.216823    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.221656    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.221688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gl7wn
	I0718 20:38:22.221691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.221695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.221698    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.223037    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.223438    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.223443    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.223447    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.223449    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.224627    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.224906    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.224912    4727 pod_ready.go:81] duration metric: took 3.247917ms for pod "coredns-7db6d8ff4d-gl7wn" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224916    4727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.224935    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t5fk7
	I0718 20:38:22.224937    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.224950    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.224954    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.226106    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.226400    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.226404    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.226411    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.226414    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.227526    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.227886    4727 pod_ready.go:92] pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.227891    4727 pod_ready.go:81] duration metric: took 2.972458ms for pod "coredns-7db6d8ff4d-t5fk7" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227894    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.227913    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000
	I0718 20:38:22.227919    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.227923    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.227925    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.228991    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.229395    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.229399    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.229402    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.229406    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.230465    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.230693    4727 pod_ready.go:92] pod "etcd-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.230699    4727 pod_ready.go:81] duration metric: took 2.801916ms for pod "etcd-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230703    4727 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.230720    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256000-m02
	I0718 20:38:22.230723    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.230726    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.230728    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.231834    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.232263    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:22.232268    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.232271    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.232273    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.233360    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.233783    4727 pod_ready.go:92] pod "etcd-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.233789    4727 pod_ready.go:81] duration metric: took 3.083416ms for pod "etcd-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.233794    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.413762    4727 request.go:629] Waited for 179.941666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413824    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000
	I0718 20:38:22.413828    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.413841    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.413846    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.415462    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:22.613785    4727 request.go:629] Waited for 197.877917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613838    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:22.613844    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.613847    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.613849    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.616581    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:22.616806    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:22.616814    4727 pod_ready.go:81] duration metric: took 383.02725ms for pod "kube-apiserver-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.616819    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:22.813743    4727 request.go:629] Waited for 196.894708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813781    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256000-m02
	I0718 20:38:22.813784    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:22.813788    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:22.813790    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:22.815511    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.012375    4727 request.go:629] Waited for 196.496584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012418    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.012422    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.012426    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.012428    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.014100    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.014297    4727 pod_ready.go:92] pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.014304    4727 pod_ready.go:81] duration metric: took 397.4915ms for pod "kube-apiserver-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.014308    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.213728    4727 request.go:629] Waited for 199.392916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213764    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000
	I0718 20:38:23.213767    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.213771    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.213774    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.215292    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.412016    4727 request.go:629] Waited for 196.230667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412048    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:23.412050    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.412055    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.412057    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.414117    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.414317    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.414324    4727 pod_ready.go:81] duration metric: took 400.022917ms for pod "kube-controller-manager-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.414329    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.613726    4727 request.go:629] Waited for 199.367083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613754    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256000-m02
	I0718 20:38:23.613757    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.613760    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.613763    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.615829    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:23.813718    4727 request.go:629] Waited for 197.566667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813747    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:23.813750    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:23.813754    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:23.813756    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:23.815391    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:23.815670    4727 pod_ready.go:92] pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:23.815679    4727 pod_ready.go:81] duration metric: took 401.357791ms for pod "kube-controller-manager-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:23.815685    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.013744    4727 request.go:629] Waited for 198.028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013777    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-99sn4
	I0718 20:38:24.013780    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.013783    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.013785    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.015358    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.213717    4727 request.go:629] Waited for 197.87625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213750    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:24.213772    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.213776    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.213779    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.215177    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.215486    4727 pod_ready.go:92] pod "kube-proxy-99sn4" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.215494    4727 pod_ready.go:81] duration metric: took 399.816291ms for pod "kube-proxy-99sn4" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.215499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.412543    4727 request.go:629] Waited for 197.022333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412572    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jxnv9
	I0718 20:38:24.412576    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.412580    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.412582    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.414200    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:24.613688    4727 request.go:629] Waited for 199.188292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613723    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:24.613734    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.613738    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.613740    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.616115    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:24.616487    4727 pod_ready.go:92] pod "kube-proxy-jxnv9" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:24.616495    4727 pod_ready.go:81] duration metric: took 401.003958ms for pod "kube-proxy-jxnv9" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.616499    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:24.811999    4727 request.go:629] Waited for 195.4745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812037    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000
	I0718 20:38:24.812040    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:24.812044    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:24.812046    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:24.813599    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.013712    4727 request.go:629] Waited for 199.880375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013743    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000
	I0718 20:38:25.013746    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.013750    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.013752    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.015408    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.015677    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.015685    4727 pod_ready.go:81] duration metric: took 399.1935ms for pod "kube-scheduler-ha-256000" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.015689    4727 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.213690    4727 request.go:629] Waited for 197.964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213729    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256000-m02
	I0718 20:38:25.213735    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.213739    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.213741    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.215582    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.413674    4727 request.go:629] Waited for 197.841584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413700    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes/ha-256000-m02
	I0718 20:38:25.413702    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.413714    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.413717    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.415433    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.415627    4727 pod_ready.go:92] pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace has status "Ready":"True"
	I0718 20:38:25.415633    4727 pod_ready.go:81] duration metric: took 399.951542ms for pod "kube-scheduler-ha-256000-m02" in "kube-system" namespace to be "Ready" ...
	I0718 20:38:25.415638    4727 pod_ready.go:38] duration metric: took 3.201238458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 20:38:25.415647    4727 api_server.go:52] waiting for apiserver process to appear ...
	I0718 20:38:25.415719    4727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:38:25.421413    4727 api_server.go:72] duration metric: took 21.843316333s to wait for apiserver process to appear ...
	I0718 20:38:25.421422    4727 api_server.go:88] waiting for apiserver healthz status ...
	I0718 20:38:25.421429    4727 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0718 20:38:25.424174    4727 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0718 20:38:25.424198    4727 round_trippers.go:463] GET https://192.168.105.5:8443/version
	I0718 20:38:25.424200    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.424204    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.424207    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.424682    4727 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 20:38:25.424723    4727 api_server.go:141] control plane version: v1.30.3
	I0718 20:38:25.424729    4727 api_server.go:131] duration metric: took 3.305084ms to wait for apiserver health ...
	I0718 20:38:25.424732    4727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 20:38:25.613673    4727 request.go:629] Waited for 188.916583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613714    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:25.613717    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.613721    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.613723    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.616608    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:25.620463    4727 system_pods.go:59] 17 kube-system pods found
	I0718 20:38:25.620472    4727 system_pods.go:61] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:25.620475    4727 system_pods.go:61] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:25.620477    4727 system_pods.go:61] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:25.620479    4727 system_pods.go:61] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:25.620480    4727 system_pods.go:61] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:25.620482    4727 system_pods.go:61] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:25.620484    4727 system_pods.go:61] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:25.620486    4727 system_pods.go:61] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:25.620488    4727 system_pods.go:61] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:25.620490    4727 system_pods.go:61] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:25.620492    4727 system_pods.go:61] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:25.620493    4727 system_pods.go:61] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:25.620495    4727 system_pods.go:61] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:25.620497    4727 system_pods.go:61] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:25.620498    4727 system_pods.go:61] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:25.620500    4727 system_pods.go:61] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:25.620502    4727 system_pods.go:61] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:25.620505    4727 system_pods.go:74] duration metric: took 195.775375ms to wait for pod list to return data ...
	I0718 20:38:25.620509    4727 default_sa.go:34] waiting for default service account to be created ...
	I0718 20:38:25.813683    4727 request.go:629] Waited for 193.137584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813709    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/default/serviceaccounts
	I0718 20:38:25.813712    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:25.813716    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:25.813721    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:25.815354    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:25.815466    4727 default_sa.go:45] found service account: "default"
	I0718 20:38:25.815474    4727 default_sa.go:55] duration metric: took 194.966875ms for default service account to be created ...
	I0718 20:38:25.815479    4727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 20:38:26.013652    4727 request.go:629] Waited for 198.147166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013688    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/namespaces/kube-system/pods
	I0718 20:38:26.013691    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.013695    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.013702    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.016448    4727 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 20:38:26.020596    4727 system_pods.go:86] 17 kube-system pods found
	I0718 20:38:26.020604    4727 system_pods.go:89] "coredns-7db6d8ff4d-gl7wn" [06887cbc-e34e-460e-bc61-28fd45550399] Running
	I0718 20:38:26.020607    4727 system_pods.go:89] "coredns-7db6d8ff4d-t5fk7" [3a3f41b1-8454-4c68-aed4-7956c9f880eb] Running
	I0718 20:38:26.020609    4727 system_pods.go:89] "etcd-ha-256000" [3c5c8a3d-60c8-47d6-90b5-e9c22e92d740] Running
	I0718 20:38:26.020611    4727 system_pods.go:89] "etcd-ha-256000-m02" [e2a1f77b-c82e-4d18-b0be-36dbc65192e7] Running
	I0718 20:38:26.020613    4727 system_pods.go:89] "kindnet-2mvfm" [97ffd74f-2ac4-43a0-a3fe-42da57fb4df6] Running
	I0718 20:38:26.020615    4727 system_pods.go:89] "kindnet-znvgn" [158e5dce-7dd1-47b9-a96d-1ba0292a834d] Running
	I0718 20:38:26.020617    4727 system_pods.go:89] "kube-apiserver-ha-256000" [b97e236c-6f98-489f-90c5-4d939f9d9600] Running
	I0718 20:38:26.020619    4727 system_pods.go:89] "kube-apiserver-ha-256000-m02" [132a5728-8ae5-46ae-adc8-c56465f805fe] Running
	I0718 20:38:26.020621    4727 system_pods.go:89] "kube-controller-manager-ha-256000" [adb3d5b6-3f1a-46da-9f15-bf717397caf4] Running
	I0718 20:38:26.020622    4727 system_pods.go:89] "kube-controller-manager-ha-256000-m02" [9c753482-1b49-4bcf-b20e-a7cedcdf116b] Running
	I0718 20:38:26.020624    4727 system_pods.go:89] "kube-proxy-99sn4" [3ac61dcf-274a-4c21-baf8-284b9790b4db] Running
	I0718 20:38:26.020626    4727 system_pods.go:89] "kube-proxy-jxnv9" [ccf2c8ef-e889-40fd-b3d5-81336370a6a5] Running
	I0718 20:38:26.020628    4727 system_pods.go:89] "kube-scheduler-ha-256000" [0d6d4c02-087d-42cc-ab2e-d39e2a1d503b] Running
	I0718 20:38:26.020629    4727 system_pods.go:89] "kube-scheduler-ha-256000-m02" [cd53b85a-8176-46ef-a893-80d2fdc3d849] Running
	I0718 20:38:26.020631    4727 system_pods.go:89] "kube-vip-ha-256000" [f815fb21-c317-479f-84d1-72be4590a68f] Running
	I0718 20:38:26.020633    4727 system_pods.go:89] "kube-vip-ha-256000-m02" [2b4410fe-39c3-4c75-8624-f3eeee50a3e9] Running
	I0718 20:38:26.020635    4727 system_pods.go:89] "storage-provisioner" [3a11238c-96dd-4d66-8983-8cdcacaa8e46] Running
	I0718 20:38:26.020641    4727 system_pods.go:126] duration metric: took 205.165291ms to wait for k8s-apps to be running ...
	I0718 20:38:26.020645    4727 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 20:38:26.020720    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:38:26.027026    4727 system_svc.go:56] duration metric: took 6.37875ms WaitForService to wait for kubelet
	I0718 20:38:26.027036    4727 kubeadm.go:582] duration metric: took 22.448955791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 20:38:26.027047    4727 node_conditions.go:102] verifying NodePressure condition ...
	I0718 20:38:26.213670    4727 request.go:629] Waited for 186.592667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213748    4727 round_trippers.go:463] GET https://192.168.105.5:8443/api/v1/nodes
	I0718 20:38:26.213751    4727 round_trippers.go:469] Request Headers:
	I0718 20:38:26.213756    4727 round_trippers.go:473]     Accept: application/json, */*
	I0718 20:38:26.213758    4727 round_trippers.go:473]     User-Agent: minikube-darwin-arm64/v0.0.0 (darwin/arm64) kubernetes/$Format
	I0718 20:38:26.215369    4727 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 20:38:26.215702    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215710    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215716    4727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 20:38:26.215719    4727 node_conditions.go:123] node cpu capacity is 2
	I0718 20:38:26.215721    4727 node_conditions.go:105] duration metric: took 188.677125ms to run NodePressure ...
	I0718 20:38:26.215733    4727 start.go:241] waiting for startup goroutines ...
	I0718 20:38:26.215747    4727 start.go:255] writing updated cluster config ...
	I0718 20:38:26.221138    4727 out.go:177] 
	I0718 20:38:26.225195    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:26.225251    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.230070    4727 out.go:177] * Starting "ha-256000-m03" control-plane node in "ha-256000" cluster
	I0718 20:38:26.238085    4727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:38:26.238092    4727 cache.go:56] Caching tarball of preloaded images
	I0718 20:38:26.238177    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:38:26.238184    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:38:26.238226    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:26.238529    4727 start.go:360] acquireMachinesLock for ha-256000-m03: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:38:26.238563    4727 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "ha-256000-m03"
	I0718 20:38:26.238573    4727 start.go:93] Provisioning new machine with config: &{Name:ha-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-256000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 20:38:26.238613    4727 start.go:125] createHost starting for "m03" (driver="qemu2")
	I0718 20:38:26.243026    4727 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 20:38:26.268172    4727 start.go:159] libmachine.API.Create for "ha-256000" (driver="qemu2")
	I0718 20:38:26.268206    4727 client.go:168] LocalClient.Create starting
	I0718 20:38:26.268290    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 20:38:26.268328    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268338    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268376    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 20:38:26.268399    4727 main.go:141] libmachine: Decoding PEM data...
	I0718 20:38:26.268406    4727 main.go:141] libmachine: Parsing certificate...
	I0718 20:38:26.268691    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 20:38:26.426584    4727 main.go:141] libmachine: Creating SSH key...
	I0718 20:38:26.572781    4727 main.go:141] libmachine: Creating Disk image...
	I0718 20:38:26.572789    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 20:38:26.573022    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.588299    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.588321    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.588408    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2 +20000M
	I0718 20:38:26.597072    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 20:38:26.597089    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.597102    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.597113    4727 main.go:141] libmachine: Starting QEMU VM...
	I0718 20:38:26.597129    4727 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:38:26.597163    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:7f:0e:0c:6d:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/disk.qcow2
	I0718 20:38:26.641473    4727 main.go:141] libmachine: STDOUT: 
	I0718 20:38:26.641500    4727 main.go:141] libmachine: STDERR: 
	I0718 20:38:26.641504    4727 main.go:141] libmachine: Attempt 0
	I0718 20:38:26.641520    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:26.641735    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:26.641749    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:26.641756    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:26.641761    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:26.641765    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:26.641770    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:26.641776    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:28.643878    4727 main.go:141] libmachine: Attempt 1
	I0718 20:38:28.643913    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:28.644011    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:28.644023    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:28.644028    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:28.644032    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:28.644036    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:28.644046    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:28.644052    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:30.646081    4727 main.go:141] libmachine: Attempt 2
	I0718 20:38:30.646120    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:30.646235    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:30.646244    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:30.646250    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:30.646254    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:30.646258    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:30.646262    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:30.646267    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:32.648349    4727 main.go:141] libmachine: Attempt 3
	I0718 20:38:32.648374    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:32.648466    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:32.648477    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:32.648481    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:32.648486    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:32.648497    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:32.648501    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:32.648514    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:34.650548    4727 main.go:141] libmachine: Attempt 4
	I0718 20:38:34.650566    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:34.650664    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:34.650674    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:34.650678    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:34.650682    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:34.650686    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:34.650692    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:34.650696    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:36.652758    4727 main.go:141] libmachine: Attempt 5
	I0718 20:38:36.652796    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:36.652971    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:36.652995    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:36.653008    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:36.653088    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:36.653108    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:36.653113    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:36.653119    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:38.654089    4727 main.go:141] libmachine: Attempt 6
	I0718 20:38:38.654205    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:38.654304    4727 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0718 20:38:38.654315    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:e8:7:38:73:30 ID:1,5a:e8:7:38:73:30 Lease:0x669b30e3}
	I0718 20:38:38.654320    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:e3:ed:16:92:d5 ID:1,6a:e3:ed:16:92:d5 Lease:0x669b30b3}
	I0718 20:38:38.654329    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ce:d3:2d:ae:a2:ce ID:1,ce:d3:2d:ae:a2:ce Lease:0x669b2ff6}
	I0718 20:38:38.654333    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:96:7c:6f:6a:d ID:1,6a:96:7c:6f:6a:d Lease:0x6699de34}
	I0718 20:38:38.654338    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:32:52:7f:d5:20:b9 ID:1,32:52:7f:d5:20:b9 Lease:0x6699ddff}
	I0718 20:38:38.654343    4727 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x669b2f80}
	I0718 20:38:40.656398    4727 main.go:141] libmachine: Attempt 7
	I0718 20:38:40.656425    4727 main.go:141] libmachine: Searching for d2:7f:e:c:6d:ba in /var/db/dhcpd_leases ...
	I0718 20:38:40.656535    4727 main.go:141] libmachine: Found 7 entries in /var/db/dhcpd_leases!
	I0718 20:38:40.656552    4727 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:d2:7f:e:c:6d:ba ID:1,d2:7f:e:c:6d:ba Lease:0x669b313f}
	I0718 20:38:40.656554    4727 main.go:141] libmachine: Found match: d2:7f:e:c:6d:ba
	I0718 20:38:40.656561    4727 main.go:141] libmachine: IP: 192.168.105.7
	I0718 20:38:40.656567    4727 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.7)...
	I0718 20:38:49.679874    4727 machine.go:94] provisionDockerMachine start ...
	I0718 20:38:49.680098    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.680386    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.680393    4727 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 20:38:49.720341    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 20:38:49.720352    4727 buildroot.go:166] provisioning hostname "ha-256000-m03"
	I0718 20:38:49.720396    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.720501    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.720507    4727 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256000-m03 && echo "ha-256000-m03" | sudo tee /etc/hostname
	I0718 20:38:49.765619    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256000-m03
	
	I0718 20:38:49.765691    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.765821    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.765830    4727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 20:38:49.809445    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 20:38:49.809457    4727 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 20:38:49.809463    4727 buildroot.go:174] setting up certificates
	I0718 20:38:49.809467    4727 provision.go:84] configureAuth start
	I0718 20:38:49.809471    4727 provision.go:143] copyHostCerts
	I0718 20:38:49.809497    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809560    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 20:38:49.809567    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 20:38:49.809680    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 20:38:49.810515    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810551    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 20:38:49.810554    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 20:38:49.810618    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 20:38:49.810856    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810884    4727 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 20:38:49.810888    4727 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 20:38:49.810942    4727 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 20:38:49.811128    4727 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.ha-256000-m03 san=[127.0.0.1 192.168.105.7 ha-256000-m03 localhost minikube]
	I0718 20:38:49.892392    4727 provision.go:177] copyRemoteCerts
	I0718 20:38:49.892426    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 20:38:49.892435    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:49.917004    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 20:38:49.917069    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0718 20:38:49.925760    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 20:38:49.925809    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 20:38:49.934495    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 20:38:49.934547    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 20:38:49.944465    4727 provision.go:87] duration metric: took 134.994083ms to configureAuth
	I0718 20:38:49.944477    4727 buildroot.go:189] setting minikube options for container-runtime
	I0718 20:38:49.946418    4727 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:38:49.946460    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.946554    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.946559    4727 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 20:38:49.988863    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 20:38:49.988874    4727 buildroot.go:70] root file system type: tmpfs
	I0718 20:38:49.988957    4727 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 20:38:49.989005    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:49.989117    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:49.989151    4727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.105.5"
	Environment="NO_PROXY=192.168.105.5,192.168.105.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 20:38:50.033434    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.105.5
	Environment=NO_PROXY=192.168.105.5,192.168.105.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 20:38:50.033494    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:50.033609    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:50.033618    4727 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 20:38:51.357934    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 20:38:51.357948    4727 machine.go:97] duration metric: took 1.678110291s to provisionDockerMachine
	I0718 20:38:51.357955    4727 client.go:171] duration metric: took 25.090436s to LocalClient.Create
	I0718 20:38:51.357970    4727 start.go:167] duration metric: took 25.090492834s to libmachine.API.Create "ha-256000"
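	Note: the provisioning step above writes docker.service.new, diffs it against the installed unit, and only then moves it into place, reloads systemd, enables and restarts docker. A minimal sketch for checking the result by hand on the guest (assuming shell access to the node, e.g. via minikube ssh):
	
	  sudo systemctl cat docker.service                # unit file actually loaded by systemd
	  systemctl show docker --property=ExecStart       # confirm a single effective ExecStart after the drop-in reset
	  sudo systemctl is-enabled docker                 # should report "enabled" after the symlink created above
	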
	I0718 20:38:51.357987    4727 start.go:293] postStartSetup for "ha-256000-m03" (driver="qemu2")
	I0718 20:38:51.357993    4727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 20:38:51.358064    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 20:38:51.358075    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.383362    4727 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 20:38:51.385220    4727 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 20:38:51.385229    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 20:38:51.385339    4727 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 20:38:51.385460    4727 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 20:38:51.385466    4727 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> /etc/ssl/certs/17122.pem
	I0718 20:38:51.385589    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 20:38:51.389076    4727 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 20:38:51.397667    4727 start.go:296] duration metric: took 39.676333ms for postStartSetup
	I0718 20:38:51.398148    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:38:51.398353    4727 start.go:128] duration metric: took 25.1604295s to createHost
	I0718 20:38:51.398381    4727 main.go:141] libmachine: Using SSH client type: native
	I0718 20:38:51.398475    4727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10101ea10] 0x101021270 <nil>  [] 0s} 192.168.105.7 22 <nil> <nil>}
	I0718 20:38:51.398479    4727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 20:38:51.443684    4727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360331.726119547
	
	I0718 20:38:51.443697    4727 fix.go:216] guest clock: 1721360331.726119547
	I0718 20:38:51.443701    4727 fix.go:229] Guest: 2024-07-18 20:38:51.726119547 -0700 PDT Remote: 2024-07-18 20:38:51.39836 -0700 PDT m=+164.266937085 (delta=327.759547ms)
	I0718 20:38:51.443713    4727 fix.go:200] guest clock delta is within tolerance: 327.759547ms
	I0718 20:38:51.443716    4727 start.go:83] releasing machines lock for "ha-256000-m03", held for 25.205843709s
	I0718 20:38:51.447883    4727 out.go:177] * Found network options:
	I0718 20:38:51.451892    4727 out.go:177]   - NO_PROXY=192.168.105.5,192.168.105.6
	W0718 20:38:51.455815    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.455829    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456208    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	W0718 20:38:51.456223    4727 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 20:38:51.456298    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	I0718 20:38:51.456287    4727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 20:38:51.456327    4727 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:38:51.479804    4727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 20:38:51.479862    4727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 20:38:51.524774    4727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 20:38:51.524786    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.524847    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.531855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 20:38:51.535855    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 20:38:51.539545    4727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.539580    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 20:38:51.543520    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.547437    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 20:38:51.551284    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 20:38:51.555870    4727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 20:38:51.559926    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 20:38:51.563772    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 20:38:51.567972    4727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 20:38:51.572324    4727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 20:38:51.576791    4727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 20:38:51.580307    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.641726    4727 ssh_runner.go:195] Run: sudo systemctl restart containerd
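	Note: the sed edits above only touch a few keys in /etc/containerd/config.toml (SystemdCgroup = false for the cgroupfs driver, sandbox_image = registry.k8s.io/pause:3.9, conf_dir = /etc/cni/net.d). A quick sketch for confirming the result on the guest (assumes shell access):
	
	  grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	  sudo systemctl is-active containerd
	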
	I0718 20:38:51.654538    4727 start.go:495] detecting cgroup driver to use...
	I0718 20:38:51.654606    4727 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 20:38:51.661500    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.671940    4727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 20:38:51.683005    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 20:38:51.689286    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.694846    4727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 20:38:51.739658    4727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 20:38:51.745604    4727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 20:38:51.752465    4727 ssh_runner.go:195] Run: which cri-dockerd
	I0718 20:38:51.754039    4727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 20:38:51.757754    4727 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 20:38:51.764400    4727 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 20:38:51.833658    4727 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 20:38:51.901993    4727 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 20:38:51.902021    4727 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 20:38:51.910153    4727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 20:38:51.983567    4727 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 20:39:53.221259    4727 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.239360917s)
	I0718 20:39:53.221338    4727 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 20:39:53.233907    4727 out.go:177] 
	W0718 20:39:53.237861    4727 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:38:50 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531478880Z" level=info msg="Starting up"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.531868672Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:38:50 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:50.532448547Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.550167964Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560007672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560035005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560063505Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560074839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560111130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560123547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560217922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560230922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560237130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560241589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560270464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.560366505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561097130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561114380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561185047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561197839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561245172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.561280130Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563923422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563946005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563952880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563959547Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.563972505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564012380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564132589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564175464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564185714Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564191797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564197839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564204005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564210464Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564216297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564222297Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564228089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564233922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564239422Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564256255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564264589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564270589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564276339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564281380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564287547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564292755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564298214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564303922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564310047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564315047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564320255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564325630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564332547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564341589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564346797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564352089Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564402380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564416755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564421630Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564427380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564432047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564437755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564467089Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564611964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564632964Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564646839Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:38:50 ha-256000-m03 dockerd[532]: time="2024-07-19T03:38:50.564655005Z" level=info msg="containerd successfully booted in 0.014823s"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.553636672Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.561497047Z" level=info msg="Loading containers: start."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.589775631Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.620757631Z" level=info msg="Loading containers: done."
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624562881Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.624599339Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:38:51 ha-256000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641454297Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:38:51 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:51.641495839Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:38:52 ha-256000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.265389656Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266153693Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266192011Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266216137Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:38:52 ha-256000-m03 dockerd[525]: time="2024-07-19T03:38:52.266284865Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:38:53 ha-256000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:38:53 ha-256000-m03 dockerd[931]: time="2024-07-19T03:38:53.282812481Z" level=info msg="Starting up"
	Jul 19 03:39:53 ha-256000-m03 dockerd[931]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:39:53 ha-256000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 20:39:53.237915    4727 out.go:239] * 
	W0718 20:39:53.239556    4727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:39:53.244752    4727 out.go:177] 
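	Note: the journal excerpt above shows dockerd on ha-256000-m03 starting once, being stopped for the restart, and then failing roughly 60 seconds later because it could not dial /run/containerd/containerd.sock (context deadline exceeded). The error text itself names the next diagnostic steps; a minimal sketch for inspecting the node by hand (assuming shell access to the m03 guest):
	
	  systemctl status docker.service containerd.service --no-pager
	  sudo journalctl -xe -u docker -u containerd --no-pager | tail -n 100
	  ls -l /run/containerd/containerd.sock    # check whether the containerd socket ever appeared
	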
	
	
	==> Docker <==
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62c92a2e03424d74abec35244521f1b7761982d7dbb7311513fb13f822c225ed/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f20cc01dd922b82b1ee5c6472024624755b1340ebceab21cf25c6eacf6e19c4/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5db9ae745b118ebe428663f3f1c8c679cdc1a26cea72ee6016f951ae34fc28ea/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858940540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858976718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.858984229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.859018904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861914444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.861992224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862003156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.862051518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889214398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889287171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889293388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:37:22 ha-256000 dockerd[1289]: time="2024-07-19T03:37:22.889346507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061800448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061853702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061875454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 dockerd[1289]: time="2024-07-19T03:39:55.061930291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:55 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a81719e2049682e90e011b40424dd53e2ae913d00000287c821ac163206c9b20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 03:39:56 ha-256000 cri-dockerd[1179]: time="2024-07-19T03:39:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404399110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404453937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404462477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:39:56 ha-256000 dockerd[1289]: time="2024-07-19T03:39:56.404689325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf6fa4236c452       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   a81719e204968       busybox-fc5497c4f-5922h
	6dfd469e7d36e       ba04bb24b9575                                                                                         15 minutes ago      Running             storage-provisioner       0                   5db9ae745b118       storage-provisioner
	1097379f4f6cb       2437cf7621777                                                                                         15 minutes ago      Running             coredns                   0                   62c92a2e03424       coredns-7db6d8ff4d-gl7wn
	9a1c088f8966e       2437cf7621777                                                                                         15 minutes ago      Running             coredns                   0                   5f20cc01dd922       coredns-7db6d8ff4d-t5fk7
	74fc7ee221313       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              15 minutes ago      Running             kindnet-cni               0                   f7fb0ae46c979       kindnet-znvgn
	9103cd3e30ac5       2351f570ed0ea                                                                                         15 minutes ago      Running             kube-proxy                0                   dd4c5c6f3ce08       kube-proxy-jxnv9
	8128016ed9c34       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   e405a8655e904       kube-vip-ha-256000
	d5ff116ccff16       014faa467e297                                                                                         15 minutes ago      Running             etcd                      0                   1dd441769aa2a       etcd-ha-256000
	29f96bba40d3a       d48f992a22722                                                                                         15 minutes ago      Running             kube-scheduler            0                   aa59c4a58dba5       kube-scheduler-ha-256000
	70ffd55232c0b       8e97cdb19e7cc                                                                                         15 minutes ago      Running             kube-controller-manager   0                   96446dab38e98       kube-controller-manager-ha-256000
	dff4e67b66806       61773190d42ff                                                                                         15 minutes ago      Running             kube-apiserver            0                   877c87b7df476       kube-apiserver-ha-256000
	
	
	==> coredns [1097379f4f6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37765 - 42644 "HINFO IN 3312804127670044151.9315725327003923. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.009474143s
	[INFO] 10.244.0.4:33989 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.044131336s
	[INFO] 10.244.0.4:49979 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001205888s
	[INFO] 10.244.1.2:54862 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000064045s
	[INFO] 10.244.0.4:54057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097379s
	[INFO] 10.244.0.4:39996 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065545s
	[INFO] 10.244.0.4:39732 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063878s
	[INFO] 10.244.1.2:57277 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070961s
	[INFO] 10.244.1.2:44544 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00059536s
	[INFO] 10.244.1.2:33879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042043s
	[INFO] 10.244.1.2:41170 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039002s
	[INFO] 10.244.0.4:32818 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000023751s
	[INFO] 10.244.0.4:44658 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027251s
	[INFO] 10.244.1.2:36566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093796s
	[INFO] 10.244.1.2:41685 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000035752s
	[INFO] 10.244.1.2:36603 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000019667s
	[INFO] 10.244.0.4:51415 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000060336s
	[INFO] 10.244.0.4:50758 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000047377s
	[INFO] 10.244.1.2:56872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077712s
	[INFO] 10.244.1.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047752s
	[INFO] 10.244.1.2:48345 - 5 "PTR IN 1.105.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000043752s
	
	
	==> coredns [9a1c088f8966] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42392 - 40278 "HINFO IN 2632545797447059373.9195703630793318012. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009665964s
	[INFO] 10.244.0.4:39096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234719s
	[INFO] 10.244.0.4:39212 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010352553s
	[INFO] 10.244.1.2:39974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082254s
	[INFO] 10.244.1.2:48244 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00062732s
	[INFO] 10.244.1.2:44600 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000022126s
	[INFO] 10.244.0.4:43528 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001761788s
	[INFO] 10.244.0.4:39922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072504s
	[INFO] 10.244.0.4:40557 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054253s
	[INFO] 10.244.0.4:36599 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000831538s
	[INFO] 10.244.0.4:35378 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072337s
	[INFO] 10.244.1.2:45376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082296s
	[INFO] 10.244.1.2:55926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000027209s
	[INFO] 10.244.1.2:50938 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031001s
	[INFO] 10.244.1.2:32874 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004696s
	[INFO] 10.244.0.4:39411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067337s
	[INFO] 10.244.0.4:56069 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000028543s
	[INFO] 10.244.1.2:60061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076628s
	[INFO] 10.244.0.4:57199 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087171s
	[INFO] 10.244.0.4:55865 - 5 "PTR IN 1.105.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000063753s
	[INFO] 10.244.1.2:50952 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059502s
	
	
	==> describe nodes <==
	Name:               ha-256000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T20_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:52:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:36:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:27 +0000   Fri, 19 Jul 2024 03:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    ha-256000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d710ce1e1896426084c421362e18dda0
	  System UUID:                d710ce1e1896426084c421362e18dda0
	  Boot ID:                    83486cc1-e7b0-4568-bb5a-c46474de14e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5922h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-gl7wn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-t5fk7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-256000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-znvgn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-256000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-256000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-jxnv9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-256000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-256000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-256000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-256000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-256000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	  Normal  NodeReady                15m   kubelet          Node ha-256000 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node ha-256000 event: Registered Node ha-256000 in Controller
	
	
	Name:               ha-256000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:50:16 +0000   Fri, 19 Jul 2024 03:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ha-256000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  System UUID:                b10ac96f2bdf4ee3ad1f9ba82eb39a4e
	  Boot ID:                    b548924b-9c86-4ba2-9a9e-2e5cc7830327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bqdhb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-256000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-2mvfm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-256000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-256000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-99sn4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-256000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-256000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-256000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-256000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-256000-m02 event: Registered Node ha-256000-m02 in Controller
	
	
	Name:               ha-256000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-256000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T20_52_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:52:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:52:25 +0000   Fri, 19 Jul 2024 03:52:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.8
	  Hostname:    ha-256000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2147448Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7ef708e53a8467ea694f2dae8b4a441
	  System UUID:                f7ef708e53a8467ea694f2dae8b4a441
	  Boot ID:                    47c8adab-13e3-4772-b14f-a5c3454cbce2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hkhd4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-5jkfp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-proxy-2l55x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s (x3 over 29s)  kubelet          Node ha-256000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x3 over 29s)  kubelet          Node ha-256000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x3 over 29s)  kubelet          Node ha-256000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node ha-256000-m04 event: Registered Node ha-256000-m04 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-256000-m04 event: Registered Node ha-256000-m04 in Controller
	  Normal  NodeReady                7s                 kubelet          Node ha-256000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650707] EINJ: EINJ table not found.
	[  +0.549800] systemd-fstab-generator[117]: Ignoring "noauto" option for root device
	[  +0.136927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000360] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db
	[  +3.624626] systemd-fstab-generator[496]: Ignoring "noauto" option for root device
	[  +0.080461] systemd-fstab-generator[508]: Ignoring "noauto" option for root device
	[  +0.034842] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.469016] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.194273] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.081032] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.086446] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +2.293076] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.088824] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.085311] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.095642] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[  +2.542348] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.036994] kauditd_printk_skb: 257 callbacks suppressed
	[  +2.330914] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +2.194691] systemd-fstab-generator[1695]: Ignoring "noauto" option for root device
	[  +0.779104] kauditd_printk_skb: 104 callbacks suppressed
	[  +3.727432] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +15.155229] kauditd_printk_skb: 62 callbacks suppressed
	[Jul19 03:37] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 03:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5ff116ccff1] <==
	{"level":"info","ts":"2024-07-19T03:38:39.213772Z","caller":"traceutil/trace.go:171","msg":"trace[213955580] linearizableReadLoop","detail":"{readStateIndex:773; appliedIndex:773; }","duration":"854.090297ms","start":"2024-07-19T03:38:38.359661Z","end":"2024-07-19T03:38:39.213752Z","steps":["trace[213955580] 'read index received'  (duration: 854.085672ms)","trace[213955580] 'applied index is now lower than readState.Index'  (duration: 1.458µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:38:39.214653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"854.964275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.5\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-19T03:38:39.214668Z","caller":"traceutil/trace.go:171","msg":"trace[64905690] range","detail":"{range_begin:/registry/masterleases/192.168.105.5; range_end:; response_count:1; response_revision:726; }","duration":"855.016063ms","start":"2024-07-19T03:38:38.359648Z","end":"2024-07-19T03:38:39.214664Z","steps":["trace[64905690] 'agreement among raft nodes before linearized reading'  (duration: 854.846409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.214698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.359622Z","time spent":"855.063476ms","remote":"127.0.0.1:50924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/192.168.105.5\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.217551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.784693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.217629Z","caller":"traceutil/trace.go:171","msg":"trace[485073674] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:726; }","duration":"181.858104ms","start":"2024-07-19T03:38:39.035755Z","end":"2024-07-19T03:38:39.217613Z","steps":["trace[485073674] 'agreement among raft nodes before linearized reading'  (duration: 181.775735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.961025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-07-19T03:38:39.218206Z","caller":"traceutil/trace.go:171","msg":"trace[1437088211] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:726; }","duration":"362.976608ms","start":"2024-07-19T03:38:38.855164Z","end":"2024-07-19T03:38:39.218141Z","steps":["trace[1437088211] 'agreement among raft nodes before linearized reading'  (duration: 362.940194ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.218228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.855138Z","time spent":"363.085141ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-07-19T03:38:39.219731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.350481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:38:39.21976Z","caller":"traceutil/trace.go:171","msg":"trace[1532987535] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:726; }","duration":"513.381938ms","start":"2024-07-19T03:38:38.706374Z","end":"2024-07-19T03:38:39.219756Z","steps":["trace[1532987535] 'agreement among raft nodes before linearized reading'  (duration: 509.325689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:38:39.219771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:38:38.706284Z","time spent":"513.484013ms","remote":"127.0.0.1:50868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-19T03:46:36.540686Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2024-07-19T03:46:36.562489Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1175,"took":"20.474469ms","hash":3930648337,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1482752,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-19T03:46:36.562693Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3930648337,"revision":1175,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T03:51:36.54679Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1806}
	{"level":"info","ts":"2024-07-19T03:51:36.56014Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1806,"took":"13.081219ms","hash":2540466080,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1347584,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2024-07-19T03:51:36.560169Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2540466080,"revision":1806,"compact-revision":1175}
	{"level":"info","ts":"2024-07-19T03:51:51.257001Z","caller":"traceutil/trace.go:171","msg":"trace[1986908692] transaction","detail":"{read_only:false; response_revision:2468; number_of_response:1; }","duration":"402.156149ms","start":"2024-07-19T03:51:50.85483Z","end":"2024-07-19T03:51:51.256986Z","steps":["trace[1986908692] 'process raft request'  (duration: 402.092938ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:51:51.263132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:51:50.85482Z","time spent":"402.257571ms","remote":"127.0.0.1:51114","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2466 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"warn","ts":"2024-07-19T03:51:51.768488Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7133861002988234895,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-19T03:51:52.115346Z","caller":"traceutil/trace.go:171","msg":"trace[670184300] linearizableReadLoop","detail":"{readStateIndex:2864; appliedIndex:2864; }","duration":"847.429387ms","start":"2024-07-19T03:51:51.267715Z","end":"2024-07-19T03:51:52.115145Z","steps":["trace[670184300] 'read index received'  (duration: 847.427304ms)","trace[670184300] 'applied index is now lower than readState.Index'  (duration: 1.583µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:51:52.115442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"847.720859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-07-19T03:51:52.115453Z","caller":"traceutil/trace.go:171","msg":"trace[1715467008] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2468; }","duration":"847.742402ms","start":"2024-07-19T03:51:51.267707Z","end":"2024-07-19T03:51:52.115449Z","steps":["trace[1715467008] 'agreement among raft nodes before linearized reading'  (duration: 847.669315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:51:52.115464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:51:51.267688Z","time spent":"847.77332ms","remote":"127.0.0.1:51016","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1133,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	
	
	==> kernel <==
	 03:52:32 up 16 min,  0 users,  load average: 0.23, 0.15, 0.10
	Linux ha-256000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [74fc7ee22131] <==
	I0719 03:51:49.218410       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:49.218412       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:51:59.214635       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:51:59.214703       1 main.go:303] handling current node
	I0719 03:51:59.214720       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:51:59.214730       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:09.209289       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:52:09.209308       1 main.go:303] handling current node
	I0719 03:52:09.209318       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:52:09.209320       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:09.209402       1 main.go:299] Handling node with IPs: map[192.168.105.8:{}]
	I0719 03:52:09.209409       1 main.go:326] Node ha-256000-m04 has CIDR [10.244.2.0/24] 
	I0719 03:52:09.209443       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.105.8 Flags: [] Table: 0} 
	I0719 03:52:19.212284       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:52:19.212307       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:19.212436       1 main.go:299] Handling node with IPs: map[192.168.105.8:{}]
	I0719 03:52:19.212444       1 main.go:326] Node ha-256000-m04 has CIDR [10.244.2.0/24] 
	I0719 03:52:19.212465       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:52:19.212488       1 main.go:303] handling current node
	I0719 03:52:29.209525       1 main.go:299] Handling node with IPs: map[192.168.105.5:{}]
	I0719 03:52:29.209550       1 main.go:303] handling current node
	I0719 03:52:29.209559       1 main.go:299] Handling node with IPs: map[192.168.105.6:{}]
	I0719 03:52:29.209562       1 main.go:326] Node ha-256000-m02 has CIDR [10.244.1.0/24] 
	I0719 03:52:29.209639       1 main.go:299] Handling node with IPs: map[192.168.105.8:{}]
	I0719 03:52:29.209646       1 main.go:326] Node ha-256000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [dff4e67b6680] <==
	W0719 03:36:38.357891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0719 03:36:38.358258       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 03:36:38.359450       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 03:36:39.162576       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 03:36:39.259455       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 03:36:39.263308       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 03:36:39.266876       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 03:36:53.692820       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 03:36:53.723447       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 03:38:39.230077       1 trace.go:236] Trace[99535700]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.105.5,type:*v1.Endpoints,resource:apiServerIPInfo (19-Jul-2024 03:38:38.359) (total time: 870ms):
	Trace[99535700]: ---"initial value restored" 856ms (03:38:39.216)
	Trace[99535700]: [870.770259ms] [870.770259ms] END
	E0719 03:51:35.729254       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50022: use of closed network connection
	E0719 03:51:35.841233       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50024: use of closed network connection
	E0719 03:51:36.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50029: use of closed network connection
	E0719 03:51:36.142429       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50031: use of closed network connection
	E0719 03:51:36.323525       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50036: use of closed network connection
	E0719 03:51:36.429306       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50038: use of closed network connection
	E0719 03:51:37.668910       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50053: use of closed network connection
	E0719 03:51:37.774366       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50055: use of closed network connection
	E0719 03:51:37.880279       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50057: use of closed network connection
	E0719 03:51:37.986190       1 conn.go:339] Error on socket receive: read tcp 192.168.105.254:8443->192.168.105.1:50059: use of closed network connection
	I0719 03:51:52.115940       1 trace.go:236] Trace[1625868550]: "Get" accept:application/json, */*,audit-id:4eb328c7-12ab-428c-8442-ad69a0af68f3,client:192.168.105.5,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/arm64) kubernetes/$Format,verb:GET (19-Jul-2024 03:51:51.267) (total time: 848ms):
	Trace[1625868550]: ---"About to write a response" 848ms (03:51:52.115)
	Trace[1625868550]: [848.499978ms] [848.499978ms] END
	
	
	==> kube-controller-manager [70ffd55232c0] <==
	I0719 03:37:23.294186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.391µs"
	I0719 03:37:23.772649       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 03:38:01.950412       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m02\" does not exist"
	I0719 03:38:01.956739       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 03:38:03.779798       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m02"
	I0719 03:39:54.715082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.549011ms"
	I0719 03:39:54.728524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.544471ms"
	I0719 03:39:54.760521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.962639ms"
	I0719 03:39:54.798120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.556155ms"
	I0719 03:39:54.810232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.068766ms"
	I0719 03:39:54.810338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.794µs"
	I0719 03:39:56.791240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.855498ms"
	I0719 03:39:56.791390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.29µs"
	I0719 03:39:57.235525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.740732ms"
	I0719 03:39:57.236806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.25502ms"
	I0719 03:52:03.930437       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-256000-m04\" does not exist"
	I0719 03:52:03.936556       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-256000-m04" podCIDRs=["10.244.2.0/24"]
	I0719 03:52:04.000931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.293µs"
	I0719 03:52:08.902831       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-256000-m04"
	I0719 03:52:24.841255       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-256000-m04"
	I0719 03:52:24.852341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.751µs"
	I0719 03:52:24.857696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.876µs"
	I0719 03:52:24.862188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.168µs"
	I0719 03:52:26.804188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.587843ms"
	I0719 03:52:26.804338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.251µs"
	
	
	==> kube-proxy [9103cd3e30ac] <==
	I0719 03:36:54.228395       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:36:54.235224       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.5"]
	I0719 03:36:54.286000       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:36:54.286028       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:36:54.286039       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:36:54.287034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:36:54.287396       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:36:54.287403       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:36:54.288184       1 config.go:192] "Starting service config controller"
	I0719 03:36:54.288259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:36:54.288280       1 config.go:319] "Starting node config controller"
	I0719 03:36:54.288282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:36:54.289304       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:36:54.289308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:36:54.388688       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:36:54.388711       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:36:54.389972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29f96bba40d3] <==
	W0719 03:36:38.043369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:36:38.043491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:36:38.078796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.078841       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.135286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:36:38.135302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:36:38.143595       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:36:38.143607       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 03:36:40.612937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 03:39:54.727744       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	E0719 03:39:54.727817       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1bb5b7eb-c669-43f7-ac3f-753596620b94(default/busybox-fc5497c4f-5922h) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-5922h"
	E0719 03:39:54.727832       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5922h\": pod busybox-fc5497c4f-5922h is already assigned to node \"ha-256000\"" pod="default/busybox-fc5497c4f-5922h"
	I0719 03:39:54.727844       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-5922h" node="ha-256000"
	E0719 03:52:03.953546       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2l55x\": pod kube-proxy-2l55x is already assigned to node \"ha-256000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2l55x" node="ha-256000-m04"
	E0719 03:52:03.955782       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod be2735f1-1760-45b5-87ff-6b6f4b5b8ac7(kube-system/kube-proxy-2l55x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2l55x"
	E0719 03:52:03.955820       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2l55x\": pod kube-proxy-2l55x is already assigned to node \"ha-256000-m04\"" pod="kube-system/kube-proxy-2l55x"
	I0719 03:52:03.955838       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2l55x" node="ha-256000-m04"
	E0719 03:52:03.954010       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5jkfp\": pod kindnet-5jkfp is already assigned to node \"ha-256000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5jkfp" node="ha-256000-m04"
	E0719 03:52:03.957289       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c177fda4-9d9e-4d84-84af-339aedfeb9b0(kube-system/kindnet-5jkfp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5jkfp"
	E0719 03:52:03.957300       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5jkfp\": pod kindnet-5jkfp is already assigned to node \"ha-256000-m04\"" pod="kube-system/kindnet-5jkfp"
	I0719 03:52:03.957375       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5jkfp" node="ha-256000-m04"
	E0719 03:52:24.851934       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkhd4\": pod busybox-fc5497c4f-hkhd4 is already assigned to node \"ha-256000-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-hkhd4" node="ha-256000-m04"
	E0719 03:52:24.851965       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b5e17355-2549-46bd-a210-89247efbd5dd(default/busybox-fc5497c4f-hkhd4) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-hkhd4"
	E0719 03:52:24.851975       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hkhd4\": pod busybox-fc5497c4f-hkhd4 is already assigned to node \"ha-256000-m04\"" pod="default/busybox-fc5497c4f-hkhd4"
	I0719 03:52:24.851984       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-hkhd4" node="ha-256000-m04"
	
	
	==> kubelet <==
	Jul 19 03:47:39 ha-256000 kubelet[2215]: E0719 03:47:39.079617    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:47:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:47:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:48:39 ha-256000 kubelet[2215]: E0719 03:48:39.080370    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:48:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:48:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:49:39 ha-256000 kubelet[2215]: E0719 03:49:39.079647    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:49:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:49:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:50:39 ha-256000 kubelet[2215]: E0719 03:50:39.079658    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:50:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:50:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:51:39 ha-256000 kubelet[2215]: E0719 03:51:39.080297    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:51:39 ha-256000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:51:39 ha-256000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:51:39 ha-256000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:51:39 ha-256000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
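Side note: the repeated kubelet errors in the log above ("Could not set up iptables canary ... can't initialize ip6tables table `nat'") suggest the ip6_tables nat support is not loaded in the guest, as the message itself hints ("do you need to insmod?"). A hypothetical manual check from the host, reusing the profile name and binary path used throughout this report, could look like:

    out/minikube-darwin-arm64 -p ha-256000 ssh "lsmod | grep ip6"
    out/minikube-darwin-arm64 -p ha-256000 ssh "sudo modprobe ip6_tables && sudo ip6tables -t nat -L"

These commands are illustrative only and are not part of the test harness.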
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-256000 -n ha-256000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-256000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.11s)
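For reference, the two post-mortem commands the harness runs above can be repeated by hand against the same profile (copied verbatim from the log; quote the template arguments if your shell expands braces or brackets):

    out/minikube-darwin-arm64 status --format={{.APIServer}} -p ha-256000 -n ha-256000
    kubectl --context ha-256000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running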

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-256000 node stop m02 -v=7 --alsologtostderr: (12.18917975s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr
E0718 20:53:59.650522    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:55:12.996900    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr: exit status 7 (2m55.969473833s)

                                                
                                                
-- stdout --
	ha-256000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-256000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-256000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 20:52:44.965670    5228 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:52:44.965986    5228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:52:44.965991    5228 out.go:304] Setting ErrFile to fd 2...
	I0718 20:52:44.965993    5228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:52:44.966134    5228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:52:44.966265    5228 out.go:298] Setting JSON to false
	I0718 20:52:44.966279    5228 mustload.go:65] Loading cluster: ha-256000
	I0718 20:52:44.966342    5228 notify.go:220] Checking for updates...
	I0718 20:52:44.966519    5228 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:52:44.966528    5228 status.go:255] checking status of ha-256000 ...
	I0718 20:52:44.967630    5228 status.go:330] ha-256000 host status = "Running" (err=<nil>)
	I0718 20:52:44.967675    5228 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:52:44.967938    5228 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:52:44.968109    5228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:52:44.968120    5228 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	W0718 20:53:10.892456    5228 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0718 20:53:10.892583    5228 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0718 20:53:10.892595    5228 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0718 20:53:10.892608    5228 status.go:257] ha-256000 status: &{Name:ha-256000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 20:53:10.892626    5228 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0718 20:53:10.892632    5228 status.go:255] checking status of ha-256000-m02 ...
	I0718 20:53:10.892900    5228 status.go:330] ha-256000-m02 host status = "Stopped" (err=<nil>)
	I0718 20:53:10.892905    5228 status.go:343] host is not running, skipping remaining checks
	I0718 20:53:10.892908    5228 status.go:257] ha-256000-m02 status: &{Name:ha-256000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:53:10.892914    5228 status.go:255] checking status of ha-256000-m03 ...
	I0718 20:53:10.893586    5228 status.go:330] ha-256000-m03 host status = "Running" (err=<nil>)
	I0718 20:53:10.893596    5228 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:53:10.893712    5228 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:53:10.893838    5228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:53:10.893844    5228 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:54:25.893367    5228 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0718 20:54:25.893416    5228 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0718 20:54:25.893423    5228 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0718 20:54:25.893427    5228 status.go:257] ha-256000-m03 status: &{Name:ha-256000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 20:54:25.893435    5228 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0718 20:54:25.893439    5228 status.go:255] checking status of ha-256000-m04 ...
	I0718 20:54:25.894143    5228 status.go:330] ha-256000-m04 host status = "Running" (err=<nil>)
	I0718 20:54:25.894150    5228 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:54:25.894244    5228 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:54:25.894360    5228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:54:25.894366    5228 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m04/id_rsa Username:docker}
	W0718 20:55:40.894383    5228 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0718 20:55:40.894431    5228 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0718 20:55:40.894439    5228 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0718 20:55:40.894442    5228 status.go:257] ha-256000-m04 status: &{Name:ha-256000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0718 20:55:40.894452    5228 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr": ha-256000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-256000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-256000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-256000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr": ha-256000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-256000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-256000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-256000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr": ha-256000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-256000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-256000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-256000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000: exit status 3 (25.963254584s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 20:56:06.853086    5263 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0718 20:56:06.853136    5263 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-256000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
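All of the "host: Error" entries above trace back to SSH dials to the node IPs (192.168.105.5, .7, .8) timing out. A hypothetical manual connectivity check, reusing the node IP, SSH key path, and docker user shown in the stderr log, would be:

    nc -z -w 5 192.168.105.5 22
    ssh -o ConnectTimeout=5 -i /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa docker@192.168.105.5 'df -h /var'

Neither command is run by the test; they only mirror the df -h /var probe that the status check performs over SSH.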

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0718 20:56:36.063661    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.782217709s)
ha_test.go:413: expected profile "ha-256000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-256000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-256000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-256000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000: exit status 3 (25.998164417s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 20:57:50.636395    5289 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0718 20:57:50.636408    5289 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-256000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (208.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-256000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.079621s)

                                                
                                                
-- stdout --
	* Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	* Restarting existing qemu2 VM for "ha-256000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-256000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 20:57:50.668452    5294 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:57:50.668758    5294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:57:50.668762    5294 out.go:304] Setting ErrFile to fd 2...
	I0718 20:57:50.668764    5294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:57:50.668905    5294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:57:50.669170    5294 mustload.go:65] Loading cluster: ha-256000
	I0718 20:57:50.669383    5294 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0718 20:57:50.669631    5294 host.go:58] "ha-256000-m02" host status: Stopped
	I0718 20:57:50.672679    5294 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
	I0718 20:57:50.676612    5294 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:57:50.676641    5294 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:57:50.676649    5294 cache.go:56] Caching tarball of preloaded images
	I0718 20:57:50.676753    5294 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 20:57:50.676760    5294 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:57:50.676828    5294 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 20:57:50.677176    5294 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:57:50.677229    5294 start.go:364] duration metric: took 38.875µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:57:50.677237    5294 start.go:96] Skipping create...Using existing machine configuration
	I0718 20:57:50.677244    5294 fix.go:54] fixHost starting: m02
	I0718 20:57:50.677352    5294 fix.go:112] recreateIfNeeded on ha-256000-m02: state=Stopped err=<nil>
	W0718 20:57:50.677361    5294 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 20:57:50.681470    5294 out.go:177] * Restarting existing qemu2 VM for "ha-256000-m02" ...
	I0718 20:57:50.684601    5294 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:57:50.684667    5294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:57:50.687471    5294 main.go:141] libmachine: STDOUT: 
	I0718 20:57:50.687491    5294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 20:57:50.687517    5294 fix.go:56] duration metric: took 10.272708ms for fixHost
	I0718 20:57:50.687522    5294 start.go:83] releasing machines lock for "ha-256000-m02", held for 10.288625ms
	W0718 20:57:50.687527    5294 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 20:57:50.687556    5294 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 20:57:50.687560    5294 start.go:729] Will try again in 5 seconds ...
	I0718 20:57:55.689467    5294 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 20:57:55.689572    5294 start.go:364] duration metric: took 79.584µs to acquireMachinesLock for "ha-256000-m02"
	I0718 20:57:55.689604    5294 start.go:96] Skipping create...Using existing machine configuration
	I0718 20:57:55.689609    5294 fix.go:54] fixHost starting: m02
	I0718 20:57:55.689763    5294 fix.go:112] recreateIfNeeded on ha-256000-m02: state=Stopped err=<nil>
	W0718 20:57:55.689771    5294 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 20:57:55.693577    5294 out.go:177] * Restarting existing qemu2 VM for "ha-256000-m02" ...
	I0718 20:57:55.697434    5294 qemu.go:418] Using hvf for hardware acceleration
	I0718 20:57:55.697465    5294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
	I0718 20:57:55.699467    5294 main.go:141] libmachine: STDOUT: 
	I0718 20:57:55.699485    5294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 20:57:55.699515    5294 fix.go:56] duration metric: took 9.906375ms for fixHost
	I0718 20:57:55.699519    5294 start.go:83] releasing machines lock for "ha-256000-m02", held for 9.94125ms
	W0718 20:57:55.699558    5294 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 20:57:55.702602    5294 out.go:177] 
	W0718 20:57:55.706598    5294 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 20:57:55.706603    5294 out.go:239] * 
	* 
	W0718 20:57:55.708555    5294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:57:55.712561    5294 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0718 20:57:50.668452    5294 out.go:291] Setting OutFile to fd 1 ...
I0718 20:57:50.668758    5294 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:57:50.668762    5294 out.go:304] Setting ErrFile to fd 2...
I0718 20:57:50.668764    5294 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:57:50.668905    5294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 20:57:50.669170    5294 mustload.go:65] Loading cluster: ha-256000
I0718 20:57:50.669383    5294 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0718 20:57:50.669631    5294 host.go:58] "ha-256000-m02" host status: Stopped
I0718 20:57:50.672679    5294 out.go:177] * Starting "ha-256000-m02" control-plane node in "ha-256000" cluster
I0718 20:57:50.676612    5294 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0718 20:57:50.676641    5294 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0718 20:57:50.676649    5294 cache.go:56] Caching tarball of preloaded images
I0718 20:57:50.676753    5294 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0718 20:57:50.676760    5294 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0718 20:57:50.676828    5294 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
I0718 20:57:50.677176    5294 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0718 20:57:50.677229    5294 start.go:364] duration metric: took 38.875µs to acquireMachinesLock for "ha-256000-m02"
I0718 20:57:50.677237    5294 start.go:96] Skipping create...Using existing machine configuration
I0718 20:57:50.677244    5294 fix.go:54] fixHost starting: m02
I0718 20:57:50.677352    5294 fix.go:112] recreateIfNeeded on ha-256000-m02: state=Stopped err=<nil>
W0718 20:57:50.677361    5294 fix.go:138] unexpected machine state, will restart: <nil>
I0718 20:57:50.681470    5294 out.go:177] * Restarting existing qemu2 VM for "ha-256000-m02" ...
I0718 20:57:50.684601    5294 qemu.go:418] Using hvf for hardware acceleration
I0718 20:57:50.684667    5294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
I0718 20:57:50.687471    5294 main.go:141] libmachine: STDOUT: 
I0718 20:57:50.687491    5294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0718 20:57:50.687517    5294 fix.go:56] duration metric: took 10.272708ms for fixHost
I0718 20:57:50.687522    5294 start.go:83] releasing machines lock for "ha-256000-m02", held for 10.288625ms
W0718 20:57:50.687527    5294 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0718 20:57:50.687556    5294 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0718 20:57:50.687560    5294 start.go:729] Will try again in 5 seconds ...
I0718 20:57:55.689467    5294 start.go:360] acquireMachinesLock for ha-256000-m02: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0718 20:57:55.689572    5294 start.go:364] duration metric: took 79.584µs to acquireMachinesLock for "ha-256000-m02"
I0718 20:57:55.689604    5294 start.go:96] Skipping create...Using existing machine configuration
I0718 20:57:55.689609    5294 fix.go:54] fixHost starting: m02
I0718 20:57:55.689763    5294 fix.go:112] recreateIfNeeded on ha-256000-m02: state=Stopped err=<nil>
W0718 20:57:55.689771    5294 fix.go:138] unexpected machine state, will restart: <nil>
I0718 20:57:55.693577    5294 out.go:177] * Restarting existing qemu2 VM for "ha-256000-m02" ...
I0718 20:57:55.697434    5294 qemu.go:418] Using hvf for hardware acceleration
I0718 20:57:55.697465    5294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e8:07:38:73:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/disk.qcow2
I0718 20:57:55.699467    5294 main.go:141] libmachine: STDOUT: 
I0718 20:57:55.699485    5294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0718 20:57:55.699515    5294 fix.go:56] duration metric: took 9.906375ms for fixHost
I0718 20:57:55.699519    5294 start.go:83] releasing machines lock for "ha-256000-m02", held for 9.94125ms
W0718 20:57:55.699558    5294 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0718 20:57:55.702602    5294 out.go:177] 
W0718 20:57:55.706598    5294 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0718 20:57:55.706603    5294 out.go:239] * 
* 
W0718 20:57:55.708555    5294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0718 20:57:55.712561    5294 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-256000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr
E0718 20:58:59.642649    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 21:00:12.988553    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr: exit status 7 (2m57.240250042s)

                                                
                                                
-- stdout --
	ha-256000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-256000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-256000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 20:57:55.747589    5298 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:57:55.747727    5298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:57:55.747730    5298 out.go:304] Setting ErrFile to fd 2...
	I0718 20:57:55.747732    5298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:57:55.747882    5298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:57:55.747999    5298 out.go:298] Setting JSON to false
	I0718 20:57:55.748010    5298 mustload.go:65] Loading cluster: ha-256000
	I0718 20:57:55.748053    5298 notify.go:220] Checking for updates...
	I0718 20:57:55.748226    5298 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:57:55.748232    5298 status.go:255] checking status of ha-256000 ...
	I0718 20:57:55.748932    5298 status.go:330] ha-256000 host status = "Running" (err=<nil>)
	I0718 20:57:55.748941    5298 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:57:55.749032    5298 host.go:66] Checking if "ha-256000" exists ...
	I0718 20:57:55.749142    5298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:57:55.749151    5298 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000/id_rsa Username:docker}
	W0718 20:57:55.749321    5298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0718 20:57:55.749338    5298 retry.go:31] will retry after 168.308715ms: dial tcp 192.168.105.5:22: connect: host is down
	W0718 20:57:55.919451    5298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0718 20:57:55.919467    5298 retry.go:31] will retry after 509.977725ms: dial tcp 192.168.105.5:22: connect: host is down
	W0718 20:57:56.431624    5298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0718 20:57:56.431645    5298 retry.go:31] will retry after 590.806811ms: dial tcp 192.168.105.5:22: connect: host is down
	W0718 20:58:22.949481    5298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0718 20:58:22.949535    5298 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0718 20:58:22.949544    5298 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0718 20:58:22.949548    5298 status.go:257] ha-256000 status: &{Name:ha-256000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 20:58:22.949560    5298 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0718 20:58:22.949564    5298 status.go:255] checking status of ha-256000-m02 ...
	I0718 20:58:22.949756    5298 status.go:330] ha-256000-m02 host status = "Stopped" (err=<nil>)
	I0718 20:58:22.949762    5298 status.go:343] host is not running, skipping remaining checks
	I0718 20:58:22.949764    5298 status.go:257] ha-256000-m02 status: &{Name:ha-256000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:58:22.949769    5298 status.go:255] checking status of ha-256000-m03 ...
	I0718 20:58:22.950371    5298 status.go:330] ha-256000-m03 host status = "Running" (err=<nil>)
	I0718 20:58:22.950378    5298 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:58:22.950493    5298 host.go:66] Checking if "ha-256000-m03" exists ...
	I0718 20:58:22.950614    5298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:58:22.950620    5298 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 20:59:37.950853    5298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0718 20:59:37.950913    5298 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0718 20:59:37.950929    5298 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0718 20:59:37.950933    5298 status.go:257] ha-256000-m03 status: &{Name:ha-256000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 20:59:37.950942    5298 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0718 20:59:37.950945    5298 status.go:255] checking status of ha-256000-m04 ...
	I0718 20:59:37.951653    5298 status.go:330] ha-256000-m04 host status = "Running" (err=<nil>)
	I0718 20:59:37.951661    5298 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:59:37.951768    5298 host.go:66] Checking if "ha-256000-m04" exists ...
	I0718 20:59:37.951890    5298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:59:37.951895    5298 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m04/id_rsa Username:docker}
	W0718 21:00:52.951798    5298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0718 21:00:52.951851    5298 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0718 21:00:52.951860    5298 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0718 21:00:52.951864    5298 status.go:257] ha-256000-m04 status: &{Name:ha-256000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:00:52.951873    5298 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-256000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000: exit status 3 (25.960911666s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:01:18.906345    5601 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0718 21:01:18.906386    5601 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-256000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.28s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (237.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-256000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-256000 -v=7 --alsologtostderr
E0718 21:03:59.630497    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 21:05:12.977075    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p ha-256000 -v=7 --alsologtostderr: signal: killed (3m31.137312083s)

                                                
                                                
-- stdout --
	* Stopping node "ha-256000-m04"  ...
	* Stopping node "ha-256000-m03"  ...
	* Stopping node "ha-256000-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:02:35.981320    5621 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:02:35.981741    5621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:02:35.981745    5621 out.go:304] Setting ErrFile to fd 2...
	I0718 21:02:35.981747    5621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:02:35.981891    5621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:02:35.982070    5621 out.go:298] Setting JSON to false
	I0718 21:02:35.982307    5621 mustload.go:65] Loading cluster: ha-256000
	I0718 21:02:35.982503    5621 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:02:35.982558    5621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/ha-256000/config.json ...
	I0718 21:02:35.982825    5621 mustload.go:65] Loading cluster: ha-256000
	I0718 21:02:35.982906    5621 config.go:182] Loaded profile config "ha-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:02:35.982929    5621 stop.go:39] StopHost: ha-256000-m04
	I0718 21:02:35.987035    5621 out.go:177] * Stopping node "ha-256000-m04"  ...
	I0718 21:02:35.993927    5621 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0718 21:02:35.993973    5621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0718 21:02:35.993985    5621 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m04/id_rsa Username:docker}
	W0718 21:03:50.994435    5621 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0718 21:03:50.994741    5621 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0718 21:03:50.994905    5621 main.go:141] libmachine: Stopping "ha-256000-m04"...
	I0718 21:04:03.011734    5621 main.go:141] libmachine: Machine "ha-256000-m04" was stopped.
	I0718 21:04:03.011756    5621 stop.go:75] duration metric: took 1m27.020355667s to stop
	I0718 21:04:03.011778    5621 stop.go:39] StopHost: ha-256000-m03
	I0718 21:04:03.020160    5621 out.go:177] * Stopping node "ha-256000-m03"  ...
	I0718 21:04:03.024106    5621 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0718 21:04:03.024153    5621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0718 21:04:03.024162    5621 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m03/id_rsa Username:docker}
	W0718 21:05:18.024556    5621 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0718 21:05:18.024771    5621 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0718 21:05:18.024838    5621 main.go:141] libmachine: Stopping "ha-256000-m03"...
	I0718 21:06:00.085082    5621 main.go:141] libmachine: Machine "ha-256000-m03" was stopped.
	I0718 21:06:00.085142    5621 stop.go:75] duration metric: took 1m57.064415792s to stop
	I0718 21:06:00.085182    5621 stop.go:39] StopHost: ha-256000-m02
	I0718 21:06:00.095715    5621 out.go:177] * Stopping node "ha-256000-m02"  ...
	I0718 21:06:00.100620    5621 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0718 21:06:00.101194    5621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0718 21:06:00.101244    5621 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/ha-256000-m02/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-darwin-arm64 node list -p ha-256000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-256000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-256000 --wait=true -v=7 --alsologtostderr: context deadline exceeded (2.583µs)
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-256000 -v=7 --alsologtostderr" : context deadline exceeded
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-256000
ha_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-256000: context deadline exceeded (417ns)
ha_test.go:474: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-256000" : context deadline exceeded
ha_test.go:479: reported node list is not the same after restart. Before restart: ha-256000	192.168.105.5
ha-256000-m02	192.168.105.6
ha-256000-m03	192.168.105.7
ha-256000-m04	192.168.105.8

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-256000 -n ha-256000: exit status 3 (26.003874625s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:06:33.078864    5658 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0718 21:06:33.078904    5658 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-256000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (237.21s)

                                                
                                    
TestImageBuild/serial/Setup (10.05s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-935000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-935000 --driver=qemu2 : exit status 80 (9.979143541s)

                                                
                                                
-- stdout --
	* [image-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-935000" primary control-plane node in "image-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-935000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-935000 -n image-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-935000 -n image-935000: exit status 7 (67.226709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.05s)

                                                
                                    
TestJSONOutput/start/Command (9.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-615000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-615000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.841983792s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6254a4d8-45d6-435e-8c04-ca80c8959d52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-615000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52fb22b7-bbed-47e9-835c-a7804cd573cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"22ef9f89-a1e6-46d7-af05-dc92c01d4c2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig"}}
	{"specversion":"1.0","id":"d7f6210b-55b2-41cd-a25c-bfda7bb5d01d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"634931ed-d8dd-41dc-801a-bea09fe7f38f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a6bb6218-4cee-43c8-8b5c-40577fa5e802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube"}}
	{"specversion":"1.0","id":"922653f9-887a-4145-868d-151a18c5a751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1185f3a4-104b-47ce-a6fc-9b6e4552c9b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a671b936-b5f3-4235-8b74-c3615233e73d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d2acdcc8-6042-470c-b407-a81db68b3f50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-615000\" primary control-plane node in \"json-output-615000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cea8b5f-5155-4b58-b0e9-6b109bc8f3ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"06d21dc0-2a69-4b9e-a0fd-aeacdd9d2a65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-615000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"52a01842-5de8-48d7-8d61-53dcae20cccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"024b7bf7-a5bb-4f8e-8678-62befff88386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3fbc6390-57de-431c-80a5-7a0786f63722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-615000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"290304b1-08b5-4f70-8da2-7805c34e2309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"f510e57e-01b2-4793-9592-c6aef4492241","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-615000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-615000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-615000 --output=json --user=testUser: exit status 83 (75.31125ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"923e456f-dc21-4c85-b3ad-ec58c784999f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-615000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"799f9f42-eeb8-4fc1-b33b-a701699cdf91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-615000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-615000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-615000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-615000 --output=json --user=testUser: exit status 83 (45.233625ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-615000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-615000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-615000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-615000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.14s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-542000 --driver=qemu2 
E0718 21:07:02.695502    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-542000 --driver=qemu2 : exit status 80 (9.860372458s)

                                                
                                                
-- stdout --
	* [first-542000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-542000" primary control-plane node in "first-542000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-542000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-542000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-542000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-18 21:07:12.237249 -0700 PDT m=+2542.317417917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-543000 -n second-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-543000 -n second-543000: exit status 85 (80.174708ms)

                                                
                                                
-- stdout --
	* Profile "second-543000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-543000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-543000" host is not running, skipping log retrieval (state="* Profile \"second-543000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-543000\"")
helpers_test.go:175: Cleaning up "second-543000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-543000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-18 21:07:12.418461 -0700 PDT m=+2542.498635167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-542000 -n first-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-542000 -n first-542000: exit status 7 (29.198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-542000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-542000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-542000
--- FAIL: TestMinikubeProfile (10.14s)
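Note: every failed start in this report dies on the same condition: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver cannot create the VM network. Below is a small diagnostic sketch (not part of the test suite) that probes that socket the way the failures describe, distinguishing a missing socket file from a present-but-dead one.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failures above

	// A stale socket file can exist even when nothing is listening,
	// so a stat alone is not enough; try an actual connection.
	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintln(os.Stderr, "socket file missing:", err)
		os.Exit(1)
	}

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On the failing CI host this is where "connection refused" shows up:
		// no socket_vmnet daemon is accepting connections on the socket.
		fmt.Fprintln(os.Stderr, "cannot connect:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this reports a refused connection before the suite runs, restarting the socket_vmnet daemon on the host (however it is managed there) and re-running is the usual next step.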

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-647000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-647000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.978985791s)

                                                
                                                
-- stdout --
	* [mount-start-1-647000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-647000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-647000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-647000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-647000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-647000 -n mount-start-1-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-647000 -n mount-start-1-647000: exit status 7 (68.924333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-647000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.05s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-024000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-024000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.751793875s)

                                                
                                                
-- stdout --
	* [multinode-024000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-024000" primary control-plane node in "multinode-024000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-024000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:07:22.763400    5835 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:07:22.763540    5835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:07:22.763543    5835 out.go:304] Setting ErrFile to fd 2...
	I0718 21:07:22.763545    5835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:07:22.763683    5835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:07:22.764714    5835 out.go:298] Setting JSON to false
	I0718 21:07:22.780927    5835 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4010,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:07:22.780995    5835 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:07:22.785229    5835 out.go:177] * [multinode-024000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:07:22.792079    5835 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:07:22.792134    5835 notify.go:220] Checking for updates...
	I0718 21:07:22.797394    5835 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:07:22.800053    5835 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:07:22.803145    5835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:07:22.806150    5835 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:07:22.809107    5835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:07:22.812291    5835 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:07:22.816091    5835 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:07:22.823127    5835 start.go:297] selected driver: qemu2
	I0718 21:07:22.823135    5835 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:07:22.823142    5835 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:07:22.825412    5835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:07:22.828098    5835 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:07:22.831130    5835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:07:22.831173    5835 cni.go:84] Creating CNI manager for ""
	I0718 21:07:22.831179    5835 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 21:07:22.831187    5835 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 21:07:22.831222    5835 start.go:340] cluster config:
	{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:07:22.834911    5835 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:07:22.841965    5835 out.go:177] * Starting "multinode-024000" primary control-plane node in "multinode-024000" cluster
	I0718 21:07:22.846099    5835 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:07:22.846115    5835 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:07:22.846132    5835 cache.go:56] Caching tarball of preloaded images
	I0718 21:07:22.846218    5835 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:07:22.846224    5835 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:07:22.846417    5835 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/multinode-024000/config.json ...
	I0718 21:07:22.846430    5835 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/multinode-024000/config.json: {Name:mkf0a03c1f0afca412169862960dd16c2764ab49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:07:22.846629    5835 start.go:360] acquireMachinesLock for multinode-024000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:07:22.846664    5835 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "multinode-024000"
	I0718 21:07:22.846675    5835 start.go:93] Provisioning new machine with config: &{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:07:22.846710    5835 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:07:22.854126    5835 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:07:22.871994    5835 start.go:159] libmachine.API.Create for "multinode-024000" (driver="qemu2")
	I0718 21:07:22.872021    5835 client.go:168] LocalClient.Create starting
	I0718 21:07:22.872083    5835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:07:22.872115    5835 main.go:141] libmachine: Decoding PEM data...
	I0718 21:07:22.872125    5835 main.go:141] libmachine: Parsing certificate...
	I0718 21:07:22.872162    5835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:07:22.872185    5835 main.go:141] libmachine: Decoding PEM data...
	I0718 21:07:22.872193    5835 main.go:141] libmachine: Parsing certificate...
	I0718 21:07:22.872540    5835 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:07:22.995280    5835 main.go:141] libmachine: Creating SSH key...
	I0718 21:07:23.087199    5835 main.go:141] libmachine: Creating Disk image...
	I0718 21:07:23.087205    5835 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:07:23.087369    5835 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:07:23.096884    5835 main.go:141] libmachine: STDOUT: 
	I0718 21:07:23.096901    5835 main.go:141] libmachine: STDERR: 
	I0718 21:07:23.096958    5835 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2 +20000M
	I0718 21:07:23.104916    5835 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:07:23.104929    5835 main.go:141] libmachine: STDERR: 
	I0718 21:07:23.104954    5835 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:07:23.104966    5835 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:07:23.104976    5835 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:07:23.105003    5835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:dd:e8:e1:ce:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:07:23.106617    5835 main.go:141] libmachine: STDOUT: 
	I0718 21:07:23.106632    5835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:07:23.106652    5835 client.go:171] duration metric: took 234.634417ms to LocalClient.Create
	I0718 21:07:25.108853    5835 start.go:128] duration metric: took 2.262185417s to createHost
	I0718 21:07:25.108917    5835 start.go:83] releasing machines lock for "multinode-024000", held for 2.26230875s
	W0718 21:07:25.108966    5835 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:07:25.122172    5835 out.go:177] * Deleting "multinode-024000" in qemu2 ...
	W0718 21:07:25.144719    5835 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:07:25.144748    5835 start.go:729] Will try again in 5 seconds ...
	I0718 21:07:30.146760    5835 start.go:360] acquireMachinesLock for multinode-024000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:07:30.147162    5835 start.go:364] duration metric: took 327µs to acquireMachinesLock for "multinode-024000"
	I0718 21:07:30.147283    5835 start.go:93] Provisioning new machine with config: &{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:07:30.147599    5835 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:07:30.159846    5835 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:07:30.211120    5835 start.go:159] libmachine.API.Create for "multinode-024000" (driver="qemu2")
	I0718 21:07:30.211169    5835 client.go:168] LocalClient.Create starting
	I0718 21:07:30.211270    5835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:07:30.211330    5835 main.go:141] libmachine: Decoding PEM data...
	I0718 21:07:30.211348    5835 main.go:141] libmachine: Parsing certificate...
	I0718 21:07:30.211405    5835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:07:30.211448    5835 main.go:141] libmachine: Decoding PEM data...
	I0718 21:07:30.211460    5835 main.go:141] libmachine: Parsing certificate...
	I0718 21:07:30.212197    5835 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:07:30.346516    5835 main.go:141] libmachine: Creating SSH key...
	I0718 21:07:30.425962    5835 main.go:141] libmachine: Creating Disk image...
	I0718 21:07:30.425967    5835 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:07:30.426136    5835 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:07:30.435286    5835 main.go:141] libmachine: STDOUT: 
	I0718 21:07:30.435304    5835 main.go:141] libmachine: STDERR: 
	I0718 21:07:30.435349    5835 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2 +20000M
	I0718 21:07:30.443239    5835 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:07:30.443255    5835 main.go:141] libmachine: STDERR: 
	I0718 21:07:30.443263    5835 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:07:30.443268    5835 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:07:30.443279    5835 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:07:30.443324    5835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:03:74:1c:f1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:07:30.444965    5835 main.go:141] libmachine: STDOUT: 
	I0718 21:07:30.444998    5835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:07:30.445019    5835 client.go:171] duration metric: took 233.8505ms to LocalClient.Create
	I0718 21:07:32.447160    5835 start.go:128] duration metric: took 2.299594917s to createHost
	I0718 21:07:32.447276    5835 start.go:83] releasing machines lock for "multinode-024000", held for 2.300088208s
	W0718 21:07:32.447658    5835 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-024000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-024000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:07:32.457309    5835 out.go:177] 
	W0718 21:07:32.461320    5835 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:07:32.461343    5835 out.go:239] * 
	* 
	W0718 21:07:32.464051    5835 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:07:32.472368    5835 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-024000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (66.05375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.82s)
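Note: the "libmachine: executing:" lines above show why a dead socket_vmnet is fatal rather than merely degrading: qemu-system-aarch64 is not launched directly but through socket_vmnet_client, which must connect to /var/run/socket_vmnet before it will exec qemu at all. The following is a sketch of that launch shape, with paths and arguments abridged from the command captured above; it is not the driver's actual code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"/var/run/socket_vmnet", // socket the client must reach before exec'ing qemu
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "2200", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0,mac=6e:dd:e8:e1:ce:a0",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is supplied by socket_vmnet_client, per the command above
		// ... disk, ISO, QMP and pidfile arguments omitted for brevity
	}
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// When socket_vmnet is down, the client fails before qemu ever starts,
		// which is the `Failed to connect ... Connection refused` seen above.
		fmt.Printf("launch failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}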

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (112.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (125.696083ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-024000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- rollout status deployment/busybox: exit status 1 (58.165916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.359125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.734042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.195542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.209791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.662875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.594125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.553083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.357375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.394958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.855458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0718 21:08:59.622042    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.179334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.215042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.10125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.968708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.991375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (30.228209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (112.12s)
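Note: each kubectl attempt above fails in roughly 100 ms, yet the test runs for nearly two minutes, because "failed to retrieve Pod IPs (may be temporary)" is retried on a poll loop until a deadline. Below is a generic sketch of that poll-until-deadline pattern; the interval and timeout are illustrative, and the real test uses its own retry helper rather than this code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs shells out the same way the failing test does; with no running
// apiserver every attempt returns `no server found for cluster ...`.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("%v: %s", err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

// pollPodIPs keeps retrying until the deadline passes, which is why the test
// burns ~2 minutes against a cluster that never came up.
func pollPodIPs(profile string, timeout, interval time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		ips, err := podIPs(profile)
		if err == nil && ips != "" {
			return ips, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for pod IPs")
		}
		time.Sleep(interval)
	}
}

func main() {
	ips, err := pollPodIPs("multinode-024000", 2*time.Minute, 10*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pod IPs:", ips)
}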

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-024000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.074666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (29.224958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-024000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-024000 -v 3 --alsologtostderr: exit status 83 (37.638792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-024000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-024000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:24.788346    5930 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:24.788692    5930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:24.788696    5930 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:24.788699    5930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:24.788835    5930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:24.789062    5930 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:24.789258    5930 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:24.794001    5930 out.go:177] * The control-plane node multinode-024000 host is not running: state=Stopped
	I0718 21:09:24.795194    5930 out.go:177]   To start a cluster, run: "minikube start -p multinode-024000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-024000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (29.371542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-024000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-024000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.082791ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-024000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-024000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-024000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (29.228167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
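Note: the second error above, "unexpected end of JSON input", is a direct consequence of the first: kubectl exited non-zero ("context was not found"), so there was no stdout to decode, and unmarshalling empty input produces exactly that message. A minimal reproduction follows; the well-formed example labels are illustrative, not captured output.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl produced no output, so the test parsed an empty string.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input

	// With a healthy cluster a label list of this shape (illustrative) decodes cleanly:
	ok := `[{"kubernetes.io/hostname":"multinode-024000","minikube.k8s.io/name":"multinode-024000"}]`
	if err := json.Unmarshal([]byte(ok), &labels); err == nil {
		fmt.Println("decoded labels for", len(labels), "node(s)")
	}
}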

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-024000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-024000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-024000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-024000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (29.700667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
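The failure above is the node-count check: the test decodes the output of `profile list --output json` and expects the "multinode-024000" profile to carry 3 entries under Config.Nodes, but the stopped cluster reports only one. A minimal sketch of that kind of check, using reduced struct shapes derived from the JSON captured in the log (assumptions, not the actual test code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Reduced shapes modelling only the fields needed for the node count;
// field names follow the JSON shown in the failure above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The serial multinode test expects 3 nodes here; this run shows 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}
```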

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status --output json --alsologtostderr: exit status 7 (28.705208ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-024000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:24.987742    5942 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:24.987878    5942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:24.987882    5942 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:24.987885    5942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:24.988019    5942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:24.988127    5942 out.go:298] Setting JSON to true
	I0718 21:09:24.988136    5942 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:24.988207    5942 notify.go:220] Checking for updates...
	I0718 21:09:24.988355    5942 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:24.988361    5942 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:24.988569    5942 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:24.988572    5942 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:24.988574    5942 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-024000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (28.791083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
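The decode error above ("cannot unmarshal object into Go value of type []cmd.Status") indicates that `status --output json` printed a single JSON object for the lone node while the test decodes into a slice of statuses. A minimal reproduction of that mismatch, using a reduced status struct (an assumption, not minikube's actual cmd.Status):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Reduced to the fields visible in the captured stdout above.
type status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
	Worker    bool
}

func main() {
	// Single-node output exactly as captured in the log: one object, not an array.
	raw := []byte(`{"Name":"multinode-024000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []status
	if err := json.Unmarshal(raw, &many); err != nil {
		// Same class of error as the test failure: an object cannot fill a slice.
		fmt.Println("slice decode:", err)
	}

	var one status
	if err := json.Unmarshal(raw, &one); err == nil {
		fmt.Println("object decode ok:", one.Name, one.Host)
	}
}
```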

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 node stop m03: exit status 85 (45.949667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-024000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status: exit status 7 (29.778958ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr: exit status 7 (29.290125ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:25.122387    5950 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:25.122519    5950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:25.122523    5950 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:25.122526    5950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:25.122643    5950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:25.122789    5950 out.go:298] Setting JSON to false
	I0718 21:09:25.122798    5950 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:25.122863    5950 notify.go:220] Checking for updates...
	I0718 21:09:25.122997    5950 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:25.123003    5950 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:25.123205    5950 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:25.123210    5950 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:25.123214    5950 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr": multinode-024000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (29.22775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (59.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.1415ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:25.180758    5954 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:25.180991    5954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:25.180994    5954 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:25.180997    5954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:25.181105    5954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:25.181314    5954 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:25.181485    5954 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:25.185932    5954 out.go:177] 
	W0718 21:09:25.188933    5954 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0718 21:09:25.188938    5954 out.go:239] * 
	* 
	W0718 21:09:25.190482    5954 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:09:25.193794    5954 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0718 21:09:25.180758    5954 out.go:291] Setting OutFile to fd 1 ...
I0718 21:09:25.180991    5954 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 21:09:25.180994    5954 out.go:304] Setting ErrFile to fd 2...
I0718 21:09:25.180997    5954 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 21:09:25.181105    5954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 21:09:25.181314    5954 mustload.go:65] Loading cluster: multinode-024000
I0718 21:09:25.181485    5954 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 21:09:25.185932    5954 out.go:177] 
W0718 21:09:25.188933    5954 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0718 21:09:25.188938    5954 out.go:239] * 
* 
W0718 21:09:25.190482    5954 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0718 21:09:25.193794    5954 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-024000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (29.820875ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:25.226027    5956 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:25.226183    5956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:25.226186    5956 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:25.226188    5956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:25.226324    5956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:25.226445    5956 out.go:298] Setting JSON to false
	I0718 21:09:25.226457    5956 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:25.226522    5956 notify.go:220] Checking for updates...
	I0718 21:09:25.226666    5956 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:25.226675    5956 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:25.226881    5956 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:25.226885    5956 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:25.226888    5956 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (70.930042ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:26.046615    5958 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:26.046800    5958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:26.046805    5958 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:26.046808    5958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:26.046981    5958 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:26.047128    5958 out.go:298] Setting JSON to false
	I0718 21:09:26.047139    5958 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:26.047168    5958 notify.go:220] Checking for updates...
	I0718 21:09:26.047420    5958 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:26.047428    5958 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:26.047740    5958 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:26.047745    5958 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:26.047747    5958 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (74.371375ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:26.921223    5960 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:26.921446    5960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:26.921451    5960 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:26.921455    5960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:26.921634    5960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:26.921812    5960 out.go:298] Setting JSON to false
	I0718 21:09:26.921826    5960 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:26.921867    5960 notify.go:220] Checking for updates...
	I0718 21:09:26.922116    5960 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:26.922128    5960 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:26.922434    5960 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:26.922439    5960 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:26.922443    5960 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (72.381041ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:29.710922    5962 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:29.711128    5962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:29.711133    5962 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:29.711137    5962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:29.711322    5962 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:29.711487    5962 out.go:298] Setting JSON to false
	I0718 21:09:29.711501    5962 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:29.711538    5962 notify.go:220] Checking for updates...
	I0718 21:09:29.711820    5962 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:29.711832    5962 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:29.712152    5962 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:29.712157    5962 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:29.712160    5962 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (70.202209ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:34.628740    5966 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:34.628936    5966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:34.628941    5966 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:34.628944    5966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:34.629105    5966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:34.629264    5966 out.go:298] Setting JSON to false
	I0718 21:09:34.629276    5966 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:34.629318    5966 notify.go:220] Checking for updates...
	I0718 21:09:34.629559    5966 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:34.629569    5966 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:34.629839    5966 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:34.629844    5966 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:34.629846    5966 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (71.358667ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:42.103406    5971 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:42.103588    5971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:42.103593    5971 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:42.103596    5971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:42.103809    5971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:42.103994    5971 out.go:298] Setting JSON to false
	I0718 21:09:42.104007    5971 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:42.104047    5971 notify.go:220] Checking for updates...
	I0718 21:09:42.104288    5971 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:42.104296    5971 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:42.104596    5971 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:42.104602    5971 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:42.104605    5971 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (72.9335ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:09:49.824285    5974 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:49.824550    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:49.824556    5974 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:49.824560    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:49.824758    5974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:09:49.824983    5974 out.go:298] Setting JSON to false
	I0718 21:09:49.824997    5974 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:09:49.825034    5974 notify.go:220] Checking for updates...
	I0718 21:09:49.825271    5974 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:49.825279    5974 status.go:255] checking status of multinode-024000 ...
	I0718 21:09:49.825550    5974 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:09:49.825556    5974 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:49.825559    5974 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (71.996875ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:10:00.339981    5977 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:10:00.340169    5977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:00.340173    5977 out.go:304] Setting ErrFile to fd 2...
	I0718 21:10:00.340177    5977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:00.340375    5977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:10:00.340528    5977 out.go:298] Setting JSON to false
	I0718 21:10:00.340541    5977 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:10:00.340583    5977 notify.go:220] Checking for updates...
	I0718 21:10:00.340822    5977 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:00.340831    5977 status.go:255] checking status of multinode-024000 ...
	I0718 21:10:00.341111    5977 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:10:00.341116    5977 status.go:343] host is not running, skipping remaining checks
	I0718 21:10:00.341120    5977 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0718 21:10:12.968421    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr: exit status 7 (74.881291ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:10:24.444098    5984 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:10:24.444319    5984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:24.444324    5984 out.go:304] Setting ErrFile to fd 2...
	I0718 21:10:24.444327    5984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:24.444497    5984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:10:24.444679    5984 out.go:298] Setting JSON to false
	I0718 21:10:24.444692    5984 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:10:24.444723    5984 notify.go:220] Checking for updates...
	I0718 21:10:24.444946    5984 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:24.444954    5984 status.go:255] checking status of multinode-024000 ...
	I0718 21:10:24.445242    5984 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:10:24.445247    5984 status.go:343] host is not running, skipping remaining checks
	I0718 21:10:24.445250    5984 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-024000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (32.583375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (59.33s)
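Judging by the timestamps of the repeated status calls above (21:09:25, 21:09:26, 21:09:29, 21:09:34, 21:09:42, 21:09:49, 21:10:00, 21:10:24), the test keeps polling `status` with growing delays while it waits for the restarted node, then gives up after roughly a minute. A rough sketch of such a poll loop; the function name, intervals, and attempt count are illustrative assumptions, not the test's actual schedule:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollRunning re-runs `minikube status` with an increasing delay until the
// host reports Running or the attempts run out, mirroring the cadence of the
// status calls in the log above. Delays and attempt count are illustrative.
func pollRunning(profile string, attempts int) bool {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").CombinedOutput()
		if strings.Contains(string(out), "host: Running") {
			return true
		}
		time.Sleep(delay)
		delay *= 2
	}
	return false
}

func main() {
	if !pollRunning("multinode-024000", 6) {
		fmt.Println("host never reached Running; see the failures above")
	}
}
```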

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-024000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-024000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-024000: (3.263299209s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-024000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-024000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219563333s)

                                                
                                                
-- stdout --
	* [multinode-024000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-024000" primary control-plane node in "multinode-024000" cluster
	* Restarting existing qemu2 VM for "multinode-024000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-024000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:10:27.833022    6008 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:10:27.833466    6008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:27.833472    6008 out.go:304] Setting ErrFile to fd 2...
	I0718 21:10:27.833476    6008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:27.833737    6008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:10:27.835365    6008 out.go:298] Setting JSON to false
	I0718 21:10:27.854965    6008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4195,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:10:27.855045    6008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:10:27.859334    6008 out.go:177] * [multinode-024000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:10:27.865358    6008 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:10:27.865405    6008 notify.go:220] Checking for updates...
	I0718 21:10:27.871287    6008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:10:27.874286    6008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:10:27.875522    6008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:10:27.878288    6008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:10:27.881300    6008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:10:27.884618    6008 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:27.884684    6008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:10:27.889296    6008 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:10:27.899942    6008 start.go:297] selected driver: qemu2
	I0718 21:10:27.899952    6008 start.go:901] validating driver "qemu2" against &{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:10:27.900060    6008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:10:27.902646    6008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:10:27.902694    6008 cni.go:84] Creating CNI manager for ""
	I0718 21:10:27.902701    6008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 21:10:27.902759    6008 start.go:340] cluster config:
	{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:10:27.906639    6008 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:10:27.914350    6008 out.go:177] * Starting "multinode-024000" primary control-plane node in "multinode-024000" cluster
	I0718 21:10:27.918294    6008 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:10:27.918313    6008 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:10:27.918322    6008 cache.go:56] Caching tarball of preloaded images
	I0718 21:10:27.918390    6008 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:10:27.918397    6008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:10:27.918461    6008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/multinode-024000/config.json ...
	I0718 21:10:27.918864    6008 start.go:360] acquireMachinesLock for multinode-024000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:10:27.918899    6008 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "multinode-024000"
	I0718 21:10:27.918909    6008 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:10:27.918915    6008 fix.go:54] fixHost starting: 
	I0718 21:10:27.919038    6008 fix.go:112] recreateIfNeeded on multinode-024000: state=Stopped err=<nil>
	W0718 21:10:27.919046    6008 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:10:27.926251    6008 out.go:177] * Restarting existing qemu2 VM for "multinode-024000" ...
	I0718 21:10:27.930277    6008 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:10:27.930321    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:03:74:1c:f1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:10:27.932522    6008 main.go:141] libmachine: STDOUT: 
	I0718 21:10:27.932542    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:10:27.932571    6008 fix.go:56] duration metric: took 13.655458ms for fixHost
	I0718 21:10:27.932577    6008 start.go:83] releasing machines lock for "multinode-024000", held for 13.673208ms
	W0718 21:10:27.932582    6008 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:10:27.932611    6008 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:10:27.932615    6008 start.go:729] Will try again in 5 seconds ...
	I0718 21:10:32.933784    6008 start.go:360] acquireMachinesLock for multinode-024000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:10:32.934148    6008 start.go:364] duration metric: took 258.167µs to acquireMachinesLock for "multinode-024000"
	I0718 21:10:32.934270    6008 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:10:32.934285    6008 fix.go:54] fixHost starting: 
	I0718 21:10:32.934958    6008 fix.go:112] recreateIfNeeded on multinode-024000: state=Stopped err=<nil>
	W0718 21:10:32.934987    6008 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:10:32.939429    6008 out.go:177] * Restarting existing qemu2 VM for "multinode-024000" ...
	I0718 21:10:32.946295    6008 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:10:32.946597    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:03:74:1c:f1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:10:32.955887    6008 main.go:141] libmachine: STDOUT: 
	I0718 21:10:32.955968    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:10:32.956076    6008 fix.go:56] duration metric: took 21.774208ms for fixHost
	I0718 21:10:32.956103    6008 start.go:83] releasing machines lock for "multinode-024000", held for 21.931ms
	W0718 21:10:32.956332    6008 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-024000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-024000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:10:32.964451    6008 out.go:177] 
	W0718 21:10:32.968394    6008 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:10:32.968419    6008 out.go:239] * 
	* 
	W0718 21:10:32.971011    6008 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:10:32.978284    6008 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-024000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-024000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (32.255542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.62s)
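Both restart attempts above fail with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver's socket_vmnet helper is not listening on the expected unix socket on this runner. A small probe that reproduces the same failure mode, assuming the default socket path shown in the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver launches VMs through socket_vmnet_client, which needs a
	// listener on this socket; with nothing listening, the dial fails the same
	// way the driver does above ("Connection refused").
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```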

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 node delete m03: exit status 83 (39.228458ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-024000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-024000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-024000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr: exit status 7 (28.952667ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:10:33.164143    6022 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:10:33.164287    6022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:33.164290    6022 out.go:304] Setting ErrFile to fd 2...
	I0718 21:10:33.164292    6022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:33.164420    6022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:10:33.164544    6022 out.go:298] Setting JSON to false
	I0718 21:10:33.164561    6022 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:10:33.164603    6022 notify.go:220] Checking for updates...
	I0718 21:10:33.164770    6022 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:33.164777    6022 status.go:255] checking status of multinode-024000 ...
	I0718 21:10:33.164974    6022 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:10:33.164978    6022 status.go:343] host is not running, skipping remaining checks
	I0718 21:10:33.164980    6022 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (28.982208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
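The post-mortem helper above runs out/minikube-darwin-arm64 status --format={{.Host}} and treats exit status 7 as a possibly-expected "host stopped" result rather than a hard failure. A rough Go sketch of that pattern follows; the binary path and profile name are copied from the helper invocation in the log, and the exit-code handling is an assumption based on the "may be ok" message.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path and profile name copied from the helper invocation above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-024000", "-n", "multinode-024000")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		// Mirrors "status error: exit status 7 (may be ok)" in the helper output.
		fmt.Printf("status error: exit status 7 (may be ok), host=%q\n", state)
		return
	}
	if err != nil {
		fmt.Printf("status failed: %v\n", err)
		return
	}
	fmt.Printf("host=%q\n", state)
}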

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-024000 stop: (2.966151125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status: exit status 7 (64.734959ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr: exit status 7 (32.520958ms)

                                                
                                                
-- stdout --
	multinode-024000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:10:36.256982    6049 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:10:36.257144    6049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:36.257148    6049 out.go:304] Setting ErrFile to fd 2...
	I0718 21:10:36.257150    6049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:36.257298    6049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:10:36.257411    6049 out.go:298] Setting JSON to false
	I0718 21:10:36.257420    6049 mustload.go:65] Loading cluster: multinode-024000
	I0718 21:10:36.257472    6049 notify.go:220] Checking for updates...
	I0718 21:10:36.257608    6049 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:36.257614    6049 status.go:255] checking status of multinode-024000 ...
	I0718 21:10:36.257821    6049 status.go:330] multinode-024000 host status = "Stopped" (err=<nil>)
	I0718 21:10:36.257824    6049 status.go:343] host is not running, skipping remaining checks
	I0718 21:10:36.257827    6049 status.go:257] multinode-024000 status: &{Name:multinode-024000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr": multinode-024000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-024000 status --alsologtostderr": multinode-024000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (28.656125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.09s)
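The "incorrect number of stopped hosts" and "incorrect number of stopped kubelets" failures suggest the check expects one status block per node of the two-node profile, while only the control-plane block appears above. A minimal sketch of such a count, assuming the check is a plain substring count over the status output (an assumption based on the failure messages, not on the test source):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as captured above: only the control-plane block is present.
	status := `multinode-024000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	wantNodes := 2 // hypothetical: control plane plus one worker in the profile
	stoppedHosts := strings.Count(status, "host: Stopped")
	stoppedKubelets := strings.Count(status, "kubelet: Stopped")
	fmt.Printf("stopped hosts: %d/%d, stopped kubelets: %d/%d\n",
		stoppedHosts, wantNodes, stoppedKubelets, wantNodes)
}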

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-024000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-024000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.1814725s)

                                                
                                                
-- stdout --
	* [multinode-024000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-024000" primary control-plane node in "multinode-024000" cluster
	* Restarting existing qemu2 VM for "multinode-024000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-024000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:10:36.314482    6053 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:10:36.314614    6053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:36.314617    6053 out.go:304] Setting ErrFile to fd 2...
	I0718 21:10:36.314620    6053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:10:36.314735    6053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:10:36.315651    6053 out.go:298] Setting JSON to false
	I0718 21:10:36.331372    6053 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4204,"bootTime":1721358032,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:10:36.331442    6053 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:10:36.335024    6053 out.go:177] * [multinode-024000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:10:36.341846    6053 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:10:36.341889    6053 notify.go:220] Checking for updates...
	I0718 21:10:36.348905    6053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:10:36.351843    6053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:10:36.354925    6053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:10:36.357933    6053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:10:36.360873    6053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:10:36.364124    6053 config.go:182] Loaded profile config "multinode-024000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:36.364411    6053 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:10:36.368879    6053 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:10:36.375879    6053 start.go:297] selected driver: qemu2
	I0718 21:10:36.375884    6053 start.go:901] validating driver "qemu2" against &{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:10:36.375932    6053 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:10:36.378076    6053 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:10:36.378097    6053 cni.go:84] Creating CNI manager for ""
	I0718 21:10:36.378103    6053 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 21:10:36.378145    6053 start.go:340] cluster config:
	{Name:multinode-024000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-024000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:10:36.381532    6053 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:10:36.388875    6053 out.go:177] * Starting "multinode-024000" primary control-plane node in "multinode-024000" cluster
	I0718 21:10:36.392893    6053 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:10:36.392908    6053 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:10:36.392919    6053 cache.go:56] Caching tarball of preloaded images
	I0718 21:10:36.392974    6053 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:10:36.392982    6053 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:10:36.393048    6053 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/multinode-024000/config.json ...
	I0718 21:10:36.393432    6053 start.go:360] acquireMachinesLock for multinode-024000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:10:36.393459    6053 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "multinode-024000"
	I0718 21:10:36.393467    6053 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:10:36.393473    6053 fix.go:54] fixHost starting: 
	I0718 21:10:36.393585    6053 fix.go:112] recreateIfNeeded on multinode-024000: state=Stopped err=<nil>
	W0718 21:10:36.393593    6053 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:10:36.396847    6053 out.go:177] * Restarting existing qemu2 VM for "multinode-024000" ...
	I0718 21:10:36.404871    6053 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:10:36.404905    6053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:03:74:1c:f1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:10:36.406812    6053 main.go:141] libmachine: STDOUT: 
	I0718 21:10:36.406834    6053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:10:36.406860    6053 fix.go:56] duration metric: took 13.387625ms for fixHost
	I0718 21:10:36.406865    6053 start.go:83] releasing machines lock for "multinode-024000", held for 13.402667ms
	W0718 21:10:36.406871    6053 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:10:36.406906    6053 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:10:36.406910    6053 start.go:729] Will try again in 5 seconds ...
	I0718 21:10:41.408977    6053 start.go:360] acquireMachinesLock for multinode-024000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:10:41.409618    6053 start.go:364] duration metric: took 527.541µs to acquireMachinesLock for "multinode-024000"
	I0718 21:10:41.409810    6053 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:10:41.409834    6053 fix.go:54] fixHost starting: 
	I0718 21:10:41.410634    6053 fix.go:112] recreateIfNeeded on multinode-024000: state=Stopped err=<nil>
	W0718 21:10:41.410661    6053 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:10:41.419101    6053 out.go:177] * Restarting existing qemu2 VM for "multinode-024000" ...
	I0718 21:10:41.423110    6053 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:10:41.423362    6053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:03:74:1c:f1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/multinode-024000/disk.qcow2
	I0718 21:10:41.433326    6053 main.go:141] libmachine: STDOUT: 
	I0718 21:10:41.433882    6053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:10:41.433963    6053 fix.go:56] duration metric: took 24.134375ms for fixHost
	I0718 21:10:41.433980    6053 start.go:83] releasing machines lock for "multinode-024000", held for 24.300083ms
	W0718 21:10:41.434116    6053 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-024000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-024000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:10:41.441167    6053 out.go:177] 
	W0718 21:10:41.445025    6053 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:10:41.445064    6053 out.go:239] * 
	* 
	W0718 21:10:41.447470    6053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:10:41.456170    6053 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-024000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (67.873959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
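The start path above makes exactly two attempts with a fixed "Will try again in 5 seconds" pause before exiting with GUEST_PROVISION. A compact Go sketch of that retry shape, with the real driver start replaced by a stub returning the error seen in the log; the attempt count and delay are taken from the log, the rest is illustrative.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails above with
// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2            // two attempts are visible in the log
	const delay = 5 * time.Second // "Will try again in 5 seconds ..."
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		if i < attempts-1 {
			time.Sleep(delay)
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
}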

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-024000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-024000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-024000-m01 --driver=qemu2 : exit status 80 (9.771140542s)

                                                
                                                
-- stdout --
	* [multinode-024000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-024000-m01" primary control-plane node in "multinode-024000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-024000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-024000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-024000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-024000-m02 --driver=qemu2 : exit status 80 (9.865968375s)

                                                
                                                
-- stdout --
	* [multinode-024000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-024000-m02" primary control-plane node in "multinode-024000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-024000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-024000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-024000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-024000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-024000: exit status 83 (80.974375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-024000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-024000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-024000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-024000 -n multinode-024000: exit status 7 (30.290792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-024000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.86s)

                                                
                                    
TestPreload (9.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.774287708s)

                                                
                                                
-- stdout --
	* [test-preload-828000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-828000" primary control-plane node in "test-preload-828000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-828000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:11:01.521559    6113 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:11:01.521675    6113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:01.521678    6113 out.go:304] Setting ErrFile to fd 2...
	I0718 21:11:01.521680    6113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:11:01.521803    6113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:11:01.522856    6113 out.go:298] Setting JSON to false
	I0718 21:11:01.538787    6113 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4229,"bootTime":1721358032,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:11:01.538868    6113 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:11:01.544929    6113 out.go:177] * [test-preload-828000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:11:01.550953    6113 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:11:01.550998    6113 notify.go:220] Checking for updates...
	I0718 21:11:01.557873    6113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:11:01.560859    6113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:11:01.563930    6113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:11:01.566868    6113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:11:01.569879    6113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:11:01.573269    6113 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:01.573333    6113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:11:01.576756    6113 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:11:01.583893    6113 start.go:297] selected driver: qemu2
	I0718 21:11:01.583901    6113 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:11:01.583908    6113 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:11:01.586175    6113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:11:01.592874    6113 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:11:01.595995    6113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:11:01.596032    6113 cni.go:84] Creating CNI manager for ""
	I0718 21:11:01.596064    6113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:11:01.596070    6113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:11:01.596107    6113 start.go:340] cluster config:
	{Name:test-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/so
cket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:11:01.599855    6113 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.607846    6113 out.go:177] * Starting "test-preload-828000" primary control-plane node in "test-preload-828000" cluster
	I0718 21:11:01.611847    6113 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0718 21:11:01.611950    6113 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/test-preload-828000/config.json ...
	I0718 21:11:01.611956    6113 cache.go:107] acquiring lock: {Name:mk538a76863935988285d11f5e65da707adf42e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.611978    6113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/test-preload-828000/config.json: {Name:mk886f783d1ea1b51a65a7e75acd9be2f2056ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:11:01.611969    6113 cache.go:107] acquiring lock: {Name:mk9de8f899701ac4da1817269d9eb60a215b736f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.611974    6113 cache.go:107] acquiring lock: {Name:mke47c070da492292bb135e70413ddc2076c62d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.612007    6113 cache.go:107] acquiring lock: {Name:mkcdae93b129606f0466c5b5f385cfcab61798be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.612144    6113 cache.go:107] acquiring lock: {Name:mkf3bb21396865ad0d85dbd6704b5d21603f8d7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.612194    6113 cache.go:107] acquiring lock: {Name:mk84615beedab71dd65842bcf99b495bea9244cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.612262    6113 cache.go:107] acquiring lock: {Name:mk3fa2943a8302511d4977798702a8d682926590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.612268    6113 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0718 21:11:01.612271    6113 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0718 21:11:01.612307    6113 start.go:360] acquireMachinesLock for test-preload-828000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:01.612385    6113 start.go:364] duration metric: took 59.791µs to acquireMachinesLock for "test-preload-828000"
	I0718 21:11:01.612423    6113 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:11:01.612454    6113 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:11:01.612438    6113 cache.go:107] acquiring lock: {Name:mkbe5a76790ff934b403766a75825665b5d7b208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:11:01.612461    6113 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0718 21:11:01.612487    6113 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:11:01.612575    6113 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0718 21:11:01.612397    6113 start.go:93] Provisioning new machine with config: &{Name:test-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:01.612589    6113 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:01.612648    6113 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0718 21:11:01.627919    6113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:11:01.631569    6113 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0718 21:11:01.631653    6113 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:11:01.631758    6113 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:11:01.634537    6113 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0718 21:11:01.634664    6113 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0718 21:11:01.634682    6113 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0718 21:11:01.634710    6113 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0718 21:11:01.634737    6113 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:11:01.647544    6113 start.go:159] libmachine.API.Create for "test-preload-828000" (driver="qemu2")
	I0718 21:11:01.647564    6113 client.go:168] LocalClient.Create starting
	I0718 21:11:01.647637    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:01.647669    6113 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:01.647678    6113 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:01.647720    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:01.647746    6113 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:01.647755    6113 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:01.648078    6113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:01.776735    6113 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:01.917671    6113 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:01.917754    6113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:01.917934    6113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2
	I0718 21:11:01.927655    6113 main.go:141] libmachine: STDOUT: 
	I0718 21:11:01.927681    6113 main.go:141] libmachine: STDERR: 
	I0718 21:11:01.927777    6113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2 +20000M
	I0718 21:11:01.937012    6113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:01.937124    6113 main.go:141] libmachine: STDERR: 
	I0718 21:11:01.937137    6113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2
	I0718 21:11:01.937141    6113 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:01.937152    6113 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:01.937176    6113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:94:bd:13:76:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2
	I0718 21:11:01.939167    6113 main.go:141] libmachine: STDOUT: 
	I0718 21:11:01.939183    6113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:01.939202    6113 client.go:171] duration metric: took 291.642708ms to LocalClient.Create
	W0718 21:11:02.080329    6113 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0718 21:11:02.080367    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0718 21:11:02.106311    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0718 21:11:02.108954    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0718 21:11:02.133747    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0718 21:11:02.157856    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0718 21:11:02.228206    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0718 21:11:02.256084    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0718 21:11:02.361410    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0718 21:11:02.361461    6113 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 749.469791ms
	I0718 21:11:02.361501    6113 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0718 21:11:02.611725    6113 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0718 21:11:02.611829    6113 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0718 21:11:02.886053    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 21:11:02.886109    6113 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.2741875s
	I0718 21:11:02.886134    6113 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 21:11:03.939350    6113 start.go:128] duration metric: took 2.326797375s to createHost
	I0718 21:11:03.939414    6113 start.go:83] releasing machines lock for "test-preload-828000", held for 2.327086917s
	W0718 21:11:03.939473    6113 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:03.953601    6113 out.go:177] * Deleting "test-preload-828000" in qemu2 ...
	W0718 21:11:03.977199    6113 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:03.977229    6113 start.go:729] Will try again in 5 seconds ...
	I0718 21:11:04.447923    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0718 21:11:04.447978    6113 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.835838125s
	I0718 21:11:04.448005    6113 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0718 21:11:05.490655    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0718 21:11:05.490705    6113 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.878698291s
	I0718 21:11:05.490730    6113 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0718 21:11:06.084091    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0718 21:11:06.084152    6113 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.472319708s
	I0718 21:11:06.084179    6113 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0718 21:11:06.795258    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0718 21:11:06.795314    6113 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.183019958s
	I0718 21:11:06.795340    6113 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0718 21:11:07.258749    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0718 21:11:07.258799    6113 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.647005667s
	I0718 21:11:07.258826    6113 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0718 21:11:08.977288    6113 start.go:360] acquireMachinesLock for test-preload-828000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:11:08.977738    6113 start.go:364] duration metric: took 367.417µs to acquireMachinesLock for "test-preload-828000"
	I0718 21:11:08.977842    6113 start.go:93] Provisioning new machine with config: &{Name:test-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:11:08.978084    6113 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:11:08.987699    6113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:11:09.040210    6113 start.go:159] libmachine.API.Create for "test-preload-828000" (driver="qemu2")
	I0718 21:11:09.040264    6113 client.go:168] LocalClient.Create starting
	I0718 21:11:09.040399    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:11:09.040466    6113 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:09.040489    6113 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:09.040562    6113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:11:09.040606    6113 main.go:141] libmachine: Decoding PEM data...
	I0718 21:11:09.040622    6113 main.go:141] libmachine: Parsing certificate...
	I0718 21:11:09.041152    6113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:11:09.178802    6113 main.go:141] libmachine: Creating SSH key...
	I0718 21:11:09.211529    6113 main.go:141] libmachine: Creating Disk image...
	I0718 21:11:09.211534    6113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:11:09.211696    6113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2
	I0718 21:11:09.221113    6113 main.go:141] libmachine: STDOUT: 
	I0718 21:11:09.221133    6113 main.go:141] libmachine: STDERR: 
	I0718 21:11:09.221178    6113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2 +20000M
	I0718 21:11:09.229083    6113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:11:09.229096    6113 main.go:141] libmachine: STDERR: 
	I0718 21:11:09.229115    6113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2
	I0718 21:11:09.229119    6113 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:11:09.229132    6113 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:11:09.229177    6113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:c6:d9:bf:a0:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/test-preload-828000/disk.qcow2
	I0718 21:11:09.230934    6113 main.go:141] libmachine: STDOUT: 
	I0718 21:11:09.230947    6113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:11:09.230962    6113 client.go:171] duration metric: took 190.6985ms to LocalClient.Create
	I0718 21:11:10.905243    6113 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0718 21:11:10.905323    6113 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.293473459s
	I0718 21:11:10.905359    6113 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0718 21:11:10.905404    6113 cache.go:87] Successfully saved all images to host disk.
	I0718 21:11:11.233144    6113 start.go:128] duration metric: took 2.2550985s to createHost
	I0718 21:11:11.233196    6113 start.go:83] releasing machines lock for "test-preload-828000", held for 2.255499125s
	W0718 21:11:11.233481    6113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:11:11.241037    6113 out.go:177] 
	W0718 21:11:11.244954    6113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:11:11.244979    6113 out.go:239] * 
	* 
	W0718 21:11:11.247492    6113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:11:11.253954    6113 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-18 21:11:11.271131 -0700 PDT m=+2781.358221542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-828000 -n test-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-828000 -n test-preload-828000: exit status 7 (67.151792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-828000
--- FAIL: TestPreload (9.92s)

TestScheduledStopUnix (9.96s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-237000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-237000 --memory=2048 --driver=qemu2 : exit status 80 (9.81426925s)

-- stdout --
	* [scheduled-stop-237000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-237000" primary control-plane node in "scheduled-stop-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-237000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-237000" primary control-plane node in "scheduled-stop-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-18 21:11:21.227927 -0700 PDT m=+2791.315306042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-237000 -n scheduled-stop-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-237000 -n scheduled-stop-237000: exit status 7 (70.165666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-237000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-237000
--- FAIL: TestScheduledStopUnix (9.96s)

TestSkaffold (12.38s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3208478696 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3208478696 version: (1.062477459s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-174000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-174000 --memory=2600 --driver=qemu2 : exit status 80 (9.671115167s)

-- stdout --
	* [skaffold-174000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-174000" primary control-plane node in "skaffold-174000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-174000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-174000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-174000" primary control-plane node in "skaffold-174000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-174000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-18 21:11:33.609205 -0700 PDT m=+2803.696942292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-174000 -n skaffold-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-174000 -n skaffold-174000: exit status 7 (61.986292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-174000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-174000
--- FAIL: TestSkaffold (12.38s)

TestRunningBinaryUpgrade (613.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2513131823 start -p running-upgrade-511000 --memory=2200 --vm-driver=qemu2 
E0718 21:13:16.035209    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2513131823 start -p running-upgrade-511000 --memory=2200 --vm-driver=qemu2 : (1m4.677636167s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-511000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0718 21:13:59.613238    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-511000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.46559775s)

-- stdout --
	* [running-upgrade-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-511000" primary control-plane node in "running-upgrade-511000" cluster
	* Updating the running qemu2 "running-upgrade-511000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0718 21:13:20.970591    6499 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:13:20.970718    6499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:13:20.970721    6499 out.go:304] Setting ErrFile to fd 2...
	I0718 21:13:20.970723    6499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:13:20.970852    6499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:13:20.971916    6499 out.go:298] Setting JSON to false
	I0718 21:13:20.988195    6499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4368,"bootTime":1721358032,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:13:20.988262    6499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:13:20.992163    6499 out.go:177] * [running-upgrade-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:13:20.999101    6499 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:13:20.999174    6499 notify.go:220] Checking for updates...
	I0718 21:13:21.006111    6499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:13:21.009106    6499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:13:21.012103    6499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:13:21.015093    6499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:13:21.018069    6499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:13:21.021309    6499 config.go:182] Loaded profile config "running-upgrade-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:13:21.023986    6499 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0718 21:13:21.027118    6499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:13:21.031151    6499 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:13:21.038029    6499 start.go:297] selected driver: qemu2
	I0718 21:13:21.038034    6499 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-511000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50316 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:13:21.038072    6499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:13:21.040237    6499 cni.go:84] Creating CNI manager for ""
	I0718 21:13:21.040254    6499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:13:21.040280    6499 start.go:340] cluster config:
	{Name:running-upgrade-511000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50316 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:13:21.040328    6499 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:13:21.046032    6499 out.go:177] * Starting "running-upgrade-511000" primary control-plane node in "running-upgrade-511000" cluster
	I0718 21:13:21.050038    6499 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0718 21:13:21.050058    6499 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0718 21:13:21.050065    6499 cache.go:56] Caching tarball of preloaded images
	I0718 21:13:21.050117    6499 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:13:21.050122    6499 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0718 21:13:21.050164    6499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/config.json ...
	I0718 21:13:21.050549    6499 start.go:360] acquireMachinesLock for running-upgrade-511000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:13:21.050581    6499 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "running-upgrade-511000"
	I0718 21:13:21.050588    6499 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:13:21.050593    6499 fix.go:54] fixHost starting: 
	I0718 21:13:21.051149    6499 fix.go:112] recreateIfNeeded on running-upgrade-511000: state=Running err=<nil>
	W0718 21:13:21.051158    6499 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:13:21.066097    6499 out.go:177] * Updating the running qemu2 "running-upgrade-511000" VM ...
	I0718 21:13:21.073949    6499 machine.go:94] provisionDockerMachine start ...
	I0718 21:13:21.073989    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.074109    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.074113    6499 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:13:21.130340    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-511000
	
	I0718 21:13:21.130359    6499 buildroot.go:166] provisioning hostname "running-upgrade-511000"
	I0718 21:13:21.130401    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.130529    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.130535    6499 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-511000 && echo "running-upgrade-511000" | sudo tee /etc/hostname
	I0718 21:13:21.185258    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-511000
	
	I0718 21:13:21.185308    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.185416    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.185424    6499 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-511000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-511000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-511000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:13:21.238605    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:13:21.238615    6499 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 21:13:21.238628    6499 buildroot.go:174] setting up certificates
	I0718 21:13:21.238633    6499 provision.go:84] configureAuth start
	I0718 21:13:21.238636    6499 provision.go:143] copyHostCerts
	I0718 21:13:21.238702    6499 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 21:13:21.238709    6499 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 21:13:21.238836    6499 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 21:13:21.239018    6499 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 21:13:21.239021    6499 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 21:13:21.239074    6499 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 21:13:21.239171    6499 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 21:13:21.239174    6499 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 21:13:21.239217    6499 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 21:13:21.239305    6499 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-511000 san=[127.0.0.1 localhost minikube running-upgrade-511000]
	I0718 21:13:21.386652    6499 provision.go:177] copyRemoteCerts
	I0718 21:13:21.386700    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:13:21.386709    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:13:21.414920    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:13:21.421783    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0718 21:13:21.428835    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 21:13:21.435994    6499 provision.go:87] duration metric: took 197.358542ms to configureAuth
	I0718 21:13:21.436003    6499 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:13:21.436108    6499 config.go:182] Loaded profile config "running-upgrade-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:13:21.436138    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.436221    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.436227    6499 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:13:21.490880    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:13:21.490888    6499 buildroot.go:70] root file system type: tmpfs
	I0718 21:13:21.490938    6499 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:13:21.490990    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.491095    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.491134    6499 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:13:21.544916    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:13:21.544976    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.545093    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.545101    6499 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:13:21.596784    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:13:21.596797    6499 machine.go:97] duration metric: took 522.858541ms to provisionDockerMachine
	I0718 21:13:21.596803    6499 start.go:293] postStartSetup for "running-upgrade-511000" (driver="qemu2")
	I0718 21:13:21.596810    6499 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:13:21.596876    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:13:21.596885    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:13:21.626139    6499 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:13:21.627455    6499 info.go:137] Remote host: Buildroot 2021.02.12
	I0718 21:13:21.627463    6499 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 21:13:21.627547    6499 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 21:13:21.627675    6499 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 21:13:21.627808    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:13:21.630516    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 21:13:21.637318    6499 start.go:296] duration metric: took 40.510583ms for postStartSetup
	I0718 21:13:21.637332    6499 fix.go:56] duration metric: took 586.757084ms for fixHost
	I0718 21:13:21.637365    6499 main.go:141] libmachine: Using SSH client type: native
	I0718 21:13:21.637469    6499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d6aa10] 0x102d6d270 <nil>  [] 0s} localhost 50284 <nil> <nil>}
	I0718 21:13:21.637476    6499 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 21:13:21.690336    6499 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362401.909428307
	
	I0718 21:13:21.690343    6499 fix.go:216] guest clock: 1721362401.909428307
	I0718 21:13:21.690347    6499 fix.go:229] Guest: 2024-07-18 21:13:21.909428307 -0700 PDT Remote: 2024-07-18 21:13:21.637334 -0700 PDT m=+0.686863626 (delta=272.094307ms)
	I0718 21:13:21.690359    6499 fix.go:200] guest clock delta is within tolerance: 272.094307ms
	I0718 21:13:21.690362    6499 start.go:83] releasing machines lock for "running-upgrade-511000", held for 639.795625ms
	I0718 21:13:21.690417    6499 ssh_runner.go:195] Run: cat /version.json
	I0718 21:13:21.690419    6499 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:13:21.690424    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:13:21.690436    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	W0718 21:13:21.690954    6499 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50284: connect: connection refused
	I0718 21:13:21.690976    6499 retry.go:31] will retry after 352.565369ms: dial tcp [::1]:50284: connect: connection refused
	W0718 21:13:22.095658    6499 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0718 21:13:22.095843    6499 ssh_runner.go:195] Run: systemctl --version
	I0718 21:13:22.099723    6499 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 21:13:22.103569    6499 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:13:22.103629    6499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0718 21:13:22.109261    6499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0718 21:13:22.116896    6499 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:13:22.116909    6499 start.go:495] detecting cgroup driver to use...
	I0718 21:13:22.117074    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:13:22.125503    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0718 21:13:22.129415    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:13:22.133267    6499 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:13:22.133302    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:13:22.137156    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:13:22.140741    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:13:22.144304    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:13:22.152112    6499 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:13:22.155429    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:13:22.158233    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:13:22.160967    6499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:13:22.164431    6499 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:13:22.167210    6499 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:13:22.169713    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:13:22.271205    6499 ssh_runner.go:195] Run: sudo systemctl restart containerd
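
The sed commands above rewrite /etc/containerd/config.toml so that containerd uses the "cgroupfs" cgroup driver (SystemdCgroup = false) before the daemon is restarted. A minimal Go sketch of that kind of in-place edit, using an illustrative config fragment rather than the real file:

// Hypothetical sketch of the rewrite performed by the sed command above:
// flip SystemdCgroup in a containerd config.toml fragment.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Match the setting at the start of a line, preserving its indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
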
	I0718 21:13:22.277679    6499 start.go:495] detecting cgroup driver to use...
	I0718 21:13:22.277740    6499 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:13:22.284449    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:13:22.290058    6499 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:13:22.296773    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:13:22.301081    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:13:22.305791    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:13:22.311246    6499 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:13:22.312550    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:13:22.315317    6499 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:13:22.320405    6499 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:13:22.413288    6499 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:13:22.503886    6499 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:13:22.503938    6499 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:13:22.509148    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:13:22.601094    6499 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:13:36.176581    6499 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.575863916s)
	I0718 21:13:36.176653    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 21:13:36.180995    6499 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0718 21:13:36.187728    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:13:36.192701    6499 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 21:13:36.268315    6499 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 21:13:36.339855    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:13:36.420832    6499 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 21:13:36.427102    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:13:36.431584    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:13:36.518575    6499 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 21:13:36.556204    6499 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 21:13:36.556276    6499 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 21:13:36.558388    6499 start.go:563] Will wait 60s for crictl version
	I0718 21:13:36.558424    6499 ssh_runner.go:195] Run: which crictl
	I0718 21:13:36.560026    6499 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 21:13:36.572784    6499 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0718 21:13:36.572846    6499 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:13:36.584994    6499 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:13:36.603194    6499 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0718 21:13:36.603315    6499 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0718 21:13:36.604643    6499 kubeadm.go:883] updating cluster {Name:running-upgrade-511000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50316 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0718 21:13:36.604683    6499 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0718 21:13:36.604720    6499 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:13:36.614900    6499 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 21:13:36.614908    6499 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0718 21:13:36.614963    6499 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 21:13:36.617907    6499 ssh_runner.go:195] Run: which lz4
	I0718 21:13:36.619096    6499 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0718 21:13:36.620298    6499 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 21:13:36.620308    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
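
The pattern above (and repeated below for each cached image) is: stat the remote path first, and only scp the file when the stat fails. A minimal local Go sketch of that existence-check-then-copy pattern, with placeholder paths rather than the test's real ones:

// Hypothetical sketch of the stat-then-transfer pattern in the log above:
// copy the file only when the destination is missing.
package main

import (
	"fmt"
	"io"
	"os"
)

func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // destination already exists; skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"))
}
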
	I0718 21:13:37.535163    6499 docker.go:649] duration metric: took 916.120958ms to copy over tarball
	I0718 21:13:37.535226    6499 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 21:13:38.735087    6499 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.199881375s)
	I0718 21:13:38.735101    6499 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 21:13:38.750792    6499 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 21:13:38.753832    6499 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0718 21:13:38.758816    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:13:38.837335    6499 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:13:40.045465    6499 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.208147542s)
	I0718 21:13:40.045554    6499 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:13:40.064612    6499 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 21:13:40.064621    6499 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0718 21:13:40.064626    6499 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0718 21:13:40.070006    6499 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:13:40.072485    6499 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:13:40.074233    6499 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0718 21:13:40.074293    6499 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:13:40.075953    6499 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:13:40.076266    6499 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:13:40.077401    6499 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0718 21:13:40.077509    6499 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:13:40.078971    6499 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:13:40.079004    6499 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:13:40.080363    6499 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:13:40.080388    6499 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:13:40.081410    6499 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:13:40.081449    6499 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:13:40.082721    6499 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:13:40.083530    6499 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:13:40.439175    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0718 21:13:40.449763    6499 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0718 21:13:40.449794    6499 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0718 21:13:40.449844    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0718 21:13:40.460652    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0718 21:13:40.460757    6499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0718 21:13:40.462622    6499 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0718 21:13:40.462633    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0718 21:13:40.471029    6499 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0718 21:13:40.471035    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0718 21:13:40.478796    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:13:40.485219    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0718 21:13:40.506756    6499 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0718 21:13:40.506802    6499 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0718 21:13:40.506820    6499 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:13:40.506877    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:13:40.507572    6499 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0718 21:13:40.507584    6499 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:13:40.507608    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0718 21:13:40.521384    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0718 21:13:40.523925    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0718 21:13:40.524025    6499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0718 21:13:40.524287    6499 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0718 21:13:40.524392    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:13:40.525908    6499 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0718 21:13:40.525921    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0718 21:13:40.530340    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:13:40.534763    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:13:40.539529    6499 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0718 21:13:40.539552    6499 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:13:40.539610    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:13:40.574907    6499 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0718 21:13:40.574915    6499 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0718 21:13:40.574931    6499 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:13:40.574950    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0718 21:13:40.574933    6499 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:13:40.574987    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:13:40.575042    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:13:40.575047    6499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0718 21:13:40.575524    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:13:40.627588    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0718 21:13:40.627599    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0718 21:13:40.627628    6499 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0718 21:13:40.627641    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0718 21:13:40.627666    6499 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0718 21:13:40.627681    6499 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:13:40.627726    6499 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:13:40.660635    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0718 21:13:40.728322    6499 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0718 21:13:40.728339    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0718 21:13:40.833423    6499 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0718 21:13:40.833538    6499 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:13:40.845900    6499 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0718 21:13:40.872016    6499 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0718 21:13:40.872040    6499 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:13:40.872097    6499 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:13:40.872968    6499 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0718 21:13:40.872974    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0718 21:13:41.191799    6499 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0718 21:13:41.191825    6499 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0718 21:13:41.192042    6499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0718 21:13:41.195954    6499 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0718 21:13:41.195988    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0718 21:13:41.246603    6499 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0718 21:13:41.246624    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0718 21:13:41.479396    6499 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0718 21:13:41.479436    6499 cache_images.go:92] duration metric: took 1.414845583s to LoadCachedImages
	W0718 21:13:41.479482    6499 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
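
The cache_images flow logged above follows one pattern per image: `docker image inspect --format {{.Id}}` to see whether the image is present with the expected content hash, `docker rmi` to drop a stale tag, then streaming the cached tarball into the daemon with `docker load`. A minimal Go sketch of that decision, assuming illustrative image/hash/tarball values taken from the log (the helper itself is not minikube's cache_images.go):

// Hypothetical sketch: ensure an image is present in the Docker daemon with the
// expected ID, otherwise re-load it from a cached tarball.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func ensureImage(image, wantID, tarball string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.Contains(strings.TrimSpace(string(out)), wantID) {
		return nil // already present with the expected content hash
	}
	// Drop any stale tag, then stream the cached tarball into the daemon.
	_ = exec.Command("docker", "rmi", image).Run()
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f
	return load.Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7")
	fmt.Println("ensureImage:", err)
}
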
	I0718 21:13:41.479489    6499 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0718 21:13:41.479533    6499 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-511000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 21:13:41.479602    6499 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 21:13:41.498704    6499 cni.go:84] Creating CNI manager for ""
	I0718 21:13:41.498717    6499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:13:41.498721    6499 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 21:13:41.498734    6499 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-511000 NodeName:running-upgrade-511000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 21:13:41.498798    6499 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-511000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 21:13:41.498861    6499 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0718 21:13:41.501830    6499 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 21:13:41.501856    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 21:13:41.504694    6499 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0718 21:13:41.509820    6499 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 21:13:41.514529    6499 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0718 21:13:41.519309    6499 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0718 21:13:41.520576    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:13:41.604169    6499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:13:41.609695    6499 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000 for IP: 10.0.2.15
	I0718 21:13:41.609704    6499 certs.go:194] generating shared ca certs ...
	I0718 21:13:41.609713    6499 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:13:41.609867    6499 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 21:13:41.609921    6499 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 21:13:41.609929    6499 certs.go:256] generating profile certs ...
	I0718 21:13:41.609986    6499 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.key
	I0718 21:13:41.610002    6499 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.key.0bb360fb
	I0718 21:13:41.610017    6499 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.crt.0bb360fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0718 21:13:41.688117    6499 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.crt.0bb360fb ...
	I0718 21:13:41.688122    6499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.crt.0bb360fb: {Name:mk5d92c796d90bc4508cf8bdab5cc43cec076772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:13:41.691241    6499 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.key.0bb360fb ...
	I0718 21:13:41.691246    6499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.key.0bb360fb: {Name:mke31e08f8733f0f66bd90092815f99544f6bebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:13:41.691378    6499 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.crt.0bb360fb -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.crt
	I0718 21:13:41.693956    6499 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.key.0bb360fb -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.key
	I0718 21:13:41.694147    6499 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/proxy-client.key
	I0718 21:13:41.694290    6499 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 21:13:41.694319    6499 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 21:13:41.694324    6499 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 21:13:41.694343    6499 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 21:13:41.694373    6499 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 21:13:41.694391    6499 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 21:13:41.694432    6499 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 21:13:41.694738    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 21:13:41.702157    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 21:13:41.709094    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 21:13:41.716559    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 21:13:41.723798    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0718 21:13:41.730591    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 21:13:41.737215    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 21:13:41.744416    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0718 21:13:41.751636    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 21:13:41.758191    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 21:13:41.764815    6499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 21:13:41.771917    6499 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 21:13:41.777017    6499 ssh_runner.go:195] Run: openssl version
	I0718 21:13:41.779005    6499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 21:13:41.782081    6499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 21:13:41.783621    6499 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 21:13:41.783641    6499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 21:13:41.785323    6499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 21:13:41.788544    6499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 21:13:41.792193    6499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:13:41.793817    6499 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:13:41.793842    6499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:13:41.795738    6499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 21:13:41.798442    6499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 21:13:41.801424    6499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 21:13:41.802797    6499 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 21:13:41.802813    6499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 21:13:41.804680    6499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
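
The lines above install each CA into the OpenSSL trust store: the file is copied under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created in /etc/ssl/certs. A minimal Go sketch of that step, shelling out to the same openssl invocation seen in the log (paths in main are illustrative):

// Hypothetical sketch of the CA-store update logged above: compute the OpenSSL
// subject hash of a certificate and link it as <hash>.0 in the trust directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(certPath, storeDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(storeDir, hash+".0")
	_ = os.Remove(link) // replace a stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
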
	I0718 21:13:41.807454    6499 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:13:41.808932    6499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0718 21:13:41.810712    6499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0718 21:13:41.812507    6499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0718 21:13:41.814520    6499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0718 21:13:41.816572    6499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0718 21:13:41.818483    6499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0718 21:13:41.820348    6499 kubeadm.go:392] StartCluster: {Name:running-upgrade-511000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50316 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:13:41.820421    6499 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:13:41.830337    6499 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 21:13:41.833465    6499 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0718 21:13:41.833480    6499 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0718 21:13:41.833502    6499 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0718 21:13:41.836479    6499 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:13:41.836714    6499 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-511000" does not appear in /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:13:41.836765    6499 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-1213/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-511000" cluster setting kubeconfig missing "running-upgrade-511000" context setting]
	I0718 21:13:41.836892    6499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:13:41.837544    6499 kapi.go:59] client config for running-upgrade-511000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040ff790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:13:41.837874    6499 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0718 21:13:41.840668    6499 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-511000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0718 21:13:41.840678    6499 kubeadm.go:1160] stopping kube-system containers ...
	I0718 21:13:41.840718    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:13:41.856232    6499 docker.go:483] Stopping containers: [e26477068994 c9b723710195 90d9e9c55b43 74d5e32e27cc b1eea5624642 eb5c28307463 a1b6e73e7e71 3290be9bb42d bc0934f9b595 4d09a98168ad 1da56c90fd8c 6346a4f31ee0]
	I0718 21:13:41.856294    6499 ssh_runner.go:195] Run: docker stop e26477068994 c9b723710195 90d9e9c55b43 74d5e32e27cc b1eea5624642 eb5c28307463 a1b6e73e7e71 3290be9bb42d bc0934f9b595 4d09a98168ad 1da56c90fd8c 6346a4f31ee0
	I0718 21:13:41.867310    6499 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0718 21:13:41.967487    6499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:13:41.971183    6499 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 19 04:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 19 04:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 19 04:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 19 04:12 /etc/kubernetes/scheduler.conf
	
	I0718 21:13:41.971220    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/admin.conf
	I0718 21:13:41.979878    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:13:41.979933    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:13:41.989730    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/kubelet.conf
	I0718 21:13:41.992687    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:13:41.992726    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:13:41.995457    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/controller-manager.conf
	I0718 21:13:41.998154    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:13:41.998184    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:13:42.000817    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/scheduler.conf
	I0718 21:13:42.003741    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:13:42.003770    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:13:42.006725    6499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:13:42.011566    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:13:42.042908    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:13:42.571708    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:13:42.776204    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:13:42.800927    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:13:42.824176    6499 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:13:42.824258    6499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:13:43.326630    6499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:13:43.826310    6499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:13:43.830925    6499 api_server.go:72] duration metric: took 1.006778375s to wait for apiserver process to appear ...
	I0718 21:13:43.830937    6499 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:13:43.830947    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:13:48.831022    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:13:48.831070    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:13:53.831295    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:13:53.831407    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:13:58.832654    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:13:58.832705    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:03.832983    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:03.833043    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:08.833430    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:08.833529    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:13.834432    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:13.834518    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:18.835648    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:18.835733    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:23.837368    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:23.837499    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:28.839630    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:28.839708    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:33.840439    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:33.840511    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:38.843083    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:38.843151    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:43.845689    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
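
The loop above polls https://10.0.2.15:8443/healthz roughly every five seconds and never gets a healthy answer, after which the harness falls back to gathering container logs. A minimal Go sketch of such a readiness poll, with an illustrative deadline and skipping TLS verification because the apiserver uses a self-signed cert in this setup (the helper is not minikube's api_server.go):

// Hypothetical sketch: poll an apiserver /healthz endpoint until it returns 200
// or the overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, deadline)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 30*time.Second))
}
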
	I0718 21:14:43.846114    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:14:43.887344    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:14:43.887493    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:14:43.909439    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:14:43.909554    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:14:43.924991    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:14:43.925097    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:14:43.937643    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:14:43.937720    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:14:43.948825    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:14:43.948893    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:14:43.959454    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:14:43.959538    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:14:43.969533    6499 logs.go:276] 0 containers: []
	W0718 21:14:43.969545    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:14:43.969598    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:14:43.984404    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:14:43.984421    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:14:43.984427    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:14:44.007858    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:14:44.007870    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:14:44.023082    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:14:44.023097    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:14:44.027561    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:14:44.027568    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:14:44.041786    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:14:44.041798    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:14:44.053004    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:14:44.053013    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:14:44.070009    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:14:44.070022    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:14:44.085696    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:14:44.085713    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:14:44.101255    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:14:44.101268    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:14:44.115213    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:14:44.115228    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:14:44.129339    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:14:44.129350    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:14:44.140724    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:14:44.140734    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:14:44.152522    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:14:44.152537    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:14:44.163944    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:14:44.163952    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:14:44.188716    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:14:44.188723    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:14:44.223679    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:14:44.223687    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:14:44.292199    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:14:44.292208    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:14:46.805724    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:51.806862    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:51.807304    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:14:51.840952    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:14:51.841115    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:14:51.864896    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:14:51.865004    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:14:51.879347    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:14:51.879430    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:14:51.892934    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:14:51.893004    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:14:51.905548    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:14:51.905624    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:14:51.916492    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:14:51.916558    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:14:51.926609    6499 logs.go:276] 0 containers: []
	W0718 21:14:51.926620    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:14:51.926673    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:14:51.936903    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:14:51.936922    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:14:51.936927    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:14:51.954506    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:14:51.954516    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:14:51.966118    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:14:51.966128    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:14:51.990977    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:14:51.990986    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:14:52.002273    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:14:52.002284    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:14:52.013891    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:14:52.013904    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:14:52.018384    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:14:52.018392    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:14:52.032739    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:14:52.032751    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:14:52.047556    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:14:52.047568    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:14:52.059135    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:14:52.059147    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:14:52.097121    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:14:52.097128    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:14:52.111531    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:14:52.111543    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:14:52.123693    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:14:52.123704    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:14:52.136060    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:14:52.136070    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:14:52.174190    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:14:52.174202    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:14:52.187813    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:14:52.187822    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:14:52.199467    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:14:52.199479    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:14:54.716437    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:14:59.718795    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:14:59.719178    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:14:59.760179    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:14:59.760362    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:14:59.782817    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:14:59.782908    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:14:59.797331    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:14:59.797403    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:14:59.809276    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:14:59.809354    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:14:59.820235    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:14:59.820297    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:14:59.830918    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:14:59.831008    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:14:59.841360    6499 logs.go:276] 0 containers: []
	W0718 21:14:59.841371    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:14:59.841433    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:14:59.851799    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:14:59.851818    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:14:59.851824    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:14:59.863369    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:14:59.863383    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:14:59.880649    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:14:59.880659    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:14:59.907073    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:14:59.907081    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:14:59.918874    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:14:59.918886    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:14:59.955695    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:14:59.955702    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:14:59.969815    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:14:59.969826    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:14:59.985552    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:14:59.985563    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:14:59.996583    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:14:59.996596    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:00.011340    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:00.011351    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:00.022772    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:00.022784    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:00.040103    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:00.040114    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:00.050969    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:00.050981    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:00.055350    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:00.055358    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:00.091548    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:00.091560    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:00.102971    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:00.102984    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:00.116366    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:00.116379    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:02.632825    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:07.635516    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:07.635849    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:07.675683    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:07.675827    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:07.694786    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:07.694881    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:07.708740    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:07.708810    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:07.720970    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:07.721041    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:07.731530    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:07.731590    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:07.741961    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:07.742016    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:07.755398    6499 logs.go:276] 0 containers: []
	W0718 21:15:07.755408    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:07.755468    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:07.765908    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:07.765926    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:07.765932    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:07.770431    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:07.770439    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:07.789169    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:07.789180    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:07.802907    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:07.802919    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:07.815188    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:07.815200    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:07.836952    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:07.836965    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:07.874968    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:07.874976    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:07.891579    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:07.891595    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:07.908950    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:07.908961    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:07.920471    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:07.920482    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:07.931939    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:07.931950    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:07.956276    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:07.956283    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:07.968218    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:07.968230    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:08.003024    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:08.003039    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:08.014112    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:08.014121    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:08.029100    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:08.029110    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:08.042738    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:08.042751    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:10.559311    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:15.560317    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:15.561116    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:15.601229    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:15.601725    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:15.623031    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:15.623133    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:15.638172    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:15.638249    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:15.650759    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:15.650836    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:15.662229    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:15.662303    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:15.673222    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:15.673283    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:15.688293    6499 logs.go:276] 0 containers: []
	W0718 21:15:15.688304    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:15.688358    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:15.702774    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:15.702790    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:15.702796    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:15.713949    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:15.713960    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:15.724588    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:15.724599    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:15.736456    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:15.736468    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:15.741082    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:15.741090    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:15.776095    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:15.776109    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:15.789640    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:15.789653    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:15.800584    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:15.800595    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:15.826420    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:15.826427    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:15.863711    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:15.863720    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:15.881797    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:15.881809    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:15.893846    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:15.893859    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:15.905050    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:15.905061    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:15.919512    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:15.919523    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:15.933810    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:15.933820    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:15.948900    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:15.948911    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:15.960220    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:15.960233    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:18.479832    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:23.482613    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:23.483038    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:23.521566    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:23.521701    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:23.543760    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:23.543859    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:23.558996    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:23.559070    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:23.571729    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:23.571797    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:23.582767    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:23.582836    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:23.597942    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:23.598009    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:23.608684    6499 logs.go:276] 0 containers: []
	W0718 21:15:23.608698    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:23.608758    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:23.619711    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:23.619734    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:23.619739    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:23.634710    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:23.634721    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:23.646664    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:23.646674    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:23.672336    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:23.672348    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:23.729611    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:23.729624    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:23.741536    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:23.741548    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:23.753491    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:23.753503    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:23.758139    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:23.758148    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:23.771361    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:23.771372    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:23.789517    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:23.789531    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:23.807822    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:23.807832    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:23.820047    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:23.820061    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:23.831235    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:23.831246    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:23.868080    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:23.868087    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:23.882718    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:23.882728    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:23.896714    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:23.896723    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:23.908093    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:23.908103    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:26.421794    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:31.424341    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:31.424728    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:31.459094    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:31.459233    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:31.479683    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:31.479794    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:31.493978    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:31.494051    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:31.506129    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:31.506202    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:31.516926    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:31.516986    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:31.527346    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:31.527408    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:31.537159    6499 logs.go:276] 0 containers: []
	W0718 21:15:31.537168    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:31.537218    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:31.547556    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:31.547574    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:31.547579    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:31.559508    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:31.559520    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:31.571012    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:31.571026    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:31.582706    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:31.582720    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:31.608149    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:31.608158    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:31.621218    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:31.621228    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:31.625964    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:31.625971    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:31.639693    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:31.639706    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:31.653364    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:31.653375    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:31.668196    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:31.668207    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:31.680056    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:31.680068    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:31.715306    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:31.715313    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:31.730567    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:31.730577    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:31.742504    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:31.742515    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:31.777174    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:31.777185    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:31.795201    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:31.795213    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:31.807204    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:31.807221    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:34.321082    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:39.323300    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:39.323697    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:39.366173    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:39.366311    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:39.386244    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:39.386337    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:39.401658    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:39.401729    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:39.415699    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:39.415773    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:39.426367    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:39.426435    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:39.441578    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:39.441644    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:39.452629    6499 logs.go:276] 0 containers: []
	W0718 21:15:39.452639    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:39.452692    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:39.463494    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:39.463514    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:39.463520    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:39.478104    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:39.478113    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:39.490268    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:39.490278    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:39.505694    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:39.505704    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:39.518166    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:39.518177    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:39.530214    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:39.530223    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:39.534555    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:39.534560    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:39.549352    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:39.549370    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:39.567522    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:39.567534    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:39.594203    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:39.594211    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:39.605774    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:39.605785    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:39.643121    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:39.643131    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:39.680934    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:39.680946    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:39.694870    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:39.694886    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:39.706578    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:39.706587    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:39.728012    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:39.728024    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:39.742586    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:39.742599    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:42.257581    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:47.259761    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:47.260146    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:47.291665    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:47.291802    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:47.311492    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:47.311585    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:47.325778    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:47.325861    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:47.337744    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:47.337816    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:47.348382    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:47.348452    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:47.367441    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:47.367516    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:47.378288    6499 logs.go:276] 0 containers: []
	W0718 21:15:47.378299    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:47.378358    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:47.388972    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:47.388991    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:47.388996    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:47.403431    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:47.403442    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:47.415396    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:47.415407    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:47.427146    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:47.427159    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:47.431571    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:47.431577    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:47.450562    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:47.450572    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:47.462617    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:47.462630    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:47.474723    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:47.474734    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:47.512746    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:47.512754    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:47.549856    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:47.549867    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:47.569790    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:47.569801    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:47.582019    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:47.582032    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:47.607786    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:47.607794    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:47.622026    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:47.622036    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:47.637761    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:47.637774    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:47.653035    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:47.653046    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:47.666652    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:47.666665    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:50.181222    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:15:55.181889    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:15:55.182020    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:15:55.197275    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:15:55.197351    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:15:55.209581    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:15:55.209655    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:15:55.221307    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:15:55.221387    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:15:55.233372    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:15:55.233448    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:15:55.246068    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:15:55.246143    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:15:55.258019    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:15:55.258094    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:15:55.269689    6499 logs.go:276] 0 containers: []
	W0718 21:15:55.269701    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:15:55.269769    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:15:55.283875    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:15:55.283894    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:15:55.283900    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:15:55.289341    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:15:55.289355    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:15:55.306538    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:15:55.306550    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:15:55.321619    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:15:55.321632    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:15:55.343145    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:15:55.343158    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:15:55.383317    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:15:55.383337    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:15:55.426264    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:15:55.426277    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:15:55.441487    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:15:55.441498    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:15:55.454052    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:15:55.454064    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:15:55.470381    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:15:55.470394    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:15:55.483671    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:15:55.483685    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:15:55.496656    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:15:55.496668    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:15:55.510090    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:15:55.510102    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:15:55.536935    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:15:55.536954    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:15:55.552219    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:15:55.552234    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:15:55.568259    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:15:55.568273    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:15:55.581692    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:15:55.581703    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:15:58.097485    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:03.099563    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:03.099708    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:03.120409    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:03.120489    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:03.132820    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:03.132889    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:03.143308    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:03.143380    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:03.154329    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:03.154400    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:03.164855    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:03.164923    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:03.176797    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:03.176872    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:03.189186    6499 logs.go:276] 0 containers: []
	W0718 21:16:03.189199    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:03.189262    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:03.200789    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:03.200835    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:03.200901    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:03.213790    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:03.213801    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:03.218473    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:03.218482    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:03.232115    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:03.232128    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:03.244345    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:03.244357    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:03.262450    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:03.262461    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:03.287687    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:03.287693    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:03.300174    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:03.300188    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:03.311449    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:03.311459    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:03.326758    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:03.326774    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:03.340585    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:03.340597    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:03.354295    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:03.354307    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:03.392356    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:03.392368    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:03.427575    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:03.427590    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:03.441939    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:03.441952    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:03.455404    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:03.455418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:03.469977    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:03.469991    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:05.983544    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:10.986064    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
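	Every retry in the remainder of this log follows the same shape: a GET against https://10.0.2.15:8443/healthz that hits the roughly five-second client timeout, after which the runner performs another log-collection pass. A minimal way to reproduce the probe from inside the guest (a sketch; -k is assumed to be needed because the apiserver's certificate is not trusted by the probing shell):

		curl -k --max-time 5 https://10.0.2.15:8443/healthz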
	I0718 21:16:10.986218    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:10.996839    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:10.996912    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:11.008743    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:11.008816    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:11.019467    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:11.019538    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:11.030100    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:11.030174    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:11.040908    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:11.040986    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:11.051553    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:11.051632    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:11.062293    6499 logs.go:276] 0 containers: []
	W0718 21:16:11.062305    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:11.062358    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:11.073215    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:11.073233    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:11.073239    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:11.077740    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:11.077746    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:11.090919    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:11.090930    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:11.108840    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:11.108854    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:11.124663    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:11.124679    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:11.139182    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:11.139195    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:11.177521    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:11.177534    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:11.193159    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:11.193171    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:11.205113    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:11.205125    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:11.229281    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:11.229294    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:11.255578    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:11.255592    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:11.295377    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:11.295393    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:11.309786    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:11.309800    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:11.322112    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:11.322128    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:11.333347    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:11.333359    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:11.350979    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:11.350990    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:11.364599    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:11.364612    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:13.879240    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:18.879784    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:18.880121    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:18.913290    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:18.913422    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:18.938415    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:18.938507    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:18.950712    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:18.950777    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:18.962283    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:18.962364    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:18.976375    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:18.976441    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:18.988292    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:18.988362    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:18.999219    6499 logs.go:276] 0 containers: []
	W0718 21:16:18.999235    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:18.999296    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:19.010243    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:19.010264    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:19.010269    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:19.022020    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:19.022034    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:19.033458    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:19.033469    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:19.037852    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:19.037861    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:19.072758    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:19.072769    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:19.086912    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:19.086922    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:19.105897    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:19.105907    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:19.124486    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:19.124498    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:19.138091    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:19.138102    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:19.173663    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:19.173672    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:19.188178    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:19.188189    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:19.203256    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:19.203267    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:19.214587    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:19.214597    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:19.238101    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:19.238109    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:19.251868    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:19.251880    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:19.264218    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:19.264228    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:19.276272    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:19.276287    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:21.790259    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:26.792465    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:26.792613    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:26.806550    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:26.806610    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:26.818091    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:26.818159    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:26.828347    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:26.828399    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:26.838427    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:26.838490    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:26.848504    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:26.848567    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:26.859199    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:26.859265    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:26.869016    6499 logs.go:276] 0 containers: []
	W0718 21:16:26.869033    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:26.869085    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:26.885440    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:26.885460    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:26.885467    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:26.899229    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:26.899239    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:26.917127    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:26.917137    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:26.940079    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:26.940088    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:26.976234    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:26.976247    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:26.989836    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:26.989848    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:27.001625    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:27.001638    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:27.016715    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:27.016729    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:27.028076    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:27.028087    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:27.032585    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:27.032594    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:27.047247    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:27.047257    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:27.058781    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:27.058790    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:27.096321    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:27.096329    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:27.110405    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:27.110418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:27.124839    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:27.124850    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:27.135905    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:27.135915    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:27.147307    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:27.147317    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:29.661422    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:34.663925    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:34.664068    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:34.680817    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:34.680892    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:34.691857    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:34.691933    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:34.702623    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:34.702700    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:34.713570    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:34.713642    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:34.724157    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:34.724219    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:34.735187    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:34.735256    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:34.745496    6499 logs.go:276] 0 containers: []
	W0718 21:16:34.745507    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:34.745563    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:34.756332    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:34.756354    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:34.756361    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:34.768444    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:34.768455    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:34.802852    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:34.802862    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:34.815463    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:34.815474    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:34.827645    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:34.827656    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:34.839541    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:34.839551    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:34.857132    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:34.857142    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:34.895503    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:34.895514    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:34.911775    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:34.911785    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:34.926796    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:34.926806    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:34.942366    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:34.942375    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:34.947377    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:34.947385    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:34.962797    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:34.962807    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:34.974568    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:34.974580    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:34.989494    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:34.989503    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:35.004075    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:35.004089    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:35.015675    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:35.015687    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:37.542080    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:42.544251    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:42.544675    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:42.585744    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:42.585834    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:42.603732    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:42.603813    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:42.621834    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:42.621894    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:42.637497    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:42.637567    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:42.657592    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:42.657663    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:42.668357    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:42.668421    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:42.678603    6499 logs.go:276] 0 containers: []
	W0718 21:16:42.678613    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:42.678668    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:42.695318    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:42.695336    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:42.695341    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:42.718593    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:42.718600    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:42.754037    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:42.754050    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:42.790918    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:42.790933    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:42.805143    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:42.805158    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:42.816144    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:42.816154    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:42.828097    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:42.828111    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:42.843129    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:42.843141    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:42.860591    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:42.860604    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:42.873510    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:42.873522    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:42.889897    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:42.889913    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:42.904281    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:42.904297    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:42.915967    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:42.915978    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:42.927171    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:42.927182    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:42.941323    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:42.941336    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:42.945508    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:42.945514    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:42.959383    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:42.959395    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:45.475760    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:50.478366    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:50.478547    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:50.501736    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:50.501816    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:50.513538    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:50.513614    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:50.525481    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:50.525562    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:50.537277    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:50.537360    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:50.549372    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:50.549446    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:50.561132    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:50.561208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:50.572479    6499 logs.go:276] 0 containers: []
	W0718 21:16:50.572492    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:50.572557    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:50.586092    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:50.586126    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:50.586132    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:50.625414    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:50.625433    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:50.644546    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:50.644563    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:50.662683    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:50.662696    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:50.675255    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:50.675267    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:50.688083    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:50.688096    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:50.707450    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:50.707467    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:50.729161    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:50.729187    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:50.746621    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:50.746635    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:50.763794    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:50.763807    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:50.777479    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:50.777491    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:50.789933    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:50.789946    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:50.794505    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:50.794518    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:50.838676    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:50.838689    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:50.854665    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:50.854684    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:50.867883    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:50.867895    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:50.894790    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:50.894812    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:53.410823    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:58.412850    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:58.413098    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:58.435373    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:58.435492    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:58.450234    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:58.450312    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:58.462265    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:58.462340    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:58.473385    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:58.473456    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:58.483721    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:58.483795    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:58.494111    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:58.494175    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:58.504123    6499 logs.go:276] 0 containers: []
	W0718 21:16:58.504132    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:58.504183    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:58.514523    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:58.514543    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:58.514548    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:58.531282    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:58.531292    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:58.546393    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:58.546404    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:58.557738    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:58.557749    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:58.569083    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:58.569093    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:58.579910    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:58.579924    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:58.602272    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:58.602279    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:58.637091    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:58.637098    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:58.650408    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:58.650418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:58.662277    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:58.662288    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:58.696646    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:58.696660    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:58.713793    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:58.713804    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:58.725568    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:58.725578    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:58.730268    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:58.730276    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:58.748514    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:58.748528    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:58.762066    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:58.762075    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:58.776160    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:58.776170    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:01.292819    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:06.294939    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:06.295087    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:06.307302    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:06.307372    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:06.317598    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:06.317663    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:06.328444    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:06.328515    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:06.338916    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:06.338984    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:06.349414    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:06.349475    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:06.360099    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:06.360164    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:06.370328    6499 logs.go:276] 0 containers: []
	W0718 21:17:06.370343    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:06.370405    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:06.382171    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:06.382188    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:06.382193    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:06.395176    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:06.395188    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:06.434432    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:06.434440    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:06.439374    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:06.439384    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:06.474748    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:06.474760    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:06.488915    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:06.488924    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:06.500164    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:06.500176    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:06.511846    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:06.511857    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:06.523792    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:06.523802    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:06.548229    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:06.548236    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:06.565367    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:06.565377    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:06.578611    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:06.578624    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:06.594153    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:06.594163    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:06.608803    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:06.608812    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:06.623280    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:06.623299    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:06.641321    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:06.641332    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:06.654659    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:06.654668    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:09.168067    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:14.170311    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:14.170485    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:14.182504    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:14.182580    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:14.193529    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:14.193609    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:14.204133    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:14.204204    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:14.215215    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:14.215291    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:14.230233    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:14.230302    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:14.240947    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:14.241018    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:14.251654    6499 logs.go:276] 0 containers: []
	W0718 21:17:14.251668    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:14.251725    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:14.262420    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:14.262438    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:14.262444    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:14.298017    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:14.298026    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:14.337500    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:14.337512    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:14.350111    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:14.350124    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:14.362731    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:14.362744    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:14.387838    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:14.387865    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:14.392901    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:14.392913    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:14.408529    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:14.408541    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:14.420611    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:14.420624    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:14.439298    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:14.439318    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:14.455516    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:14.455528    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:14.472126    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:14.472141    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:14.486021    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:14.486032    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:14.499975    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:14.499988    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:14.513366    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:14.513379    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:14.525812    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:14.525825    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:14.541348    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:14.541365    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:17.059707    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:22.061872    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:22.062055    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:22.081915    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:22.082002    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:22.097120    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:22.097196    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:22.109754    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:22.109821    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:22.121045    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:22.121126    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:22.131206    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:22.131272    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:22.145715    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:22.145786    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:22.155668    6499 logs.go:276] 0 containers: []
	W0718 21:17:22.155679    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:22.155730    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:22.166627    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:22.166642    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:22.166648    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:22.181340    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:22.181350    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:22.192678    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:22.192689    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:22.216090    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:22.216097    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:22.252895    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:22.252903    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:22.264404    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:22.264418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:22.278263    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:22.278273    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:22.290040    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:22.290050    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:22.306690    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:22.306701    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:22.318177    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:22.318186    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:22.329709    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:22.329718    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:22.349181    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:22.349192    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:22.353785    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:22.353792    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:22.392988    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:22.393001    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:22.407029    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:22.407040    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:22.418114    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:22.418126    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:22.432153    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:22.432167    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:24.951057    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:29.953286    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:29.953455    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:29.964727    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:29.964809    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:29.975922    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:29.975996    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:29.986472    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:29.986543    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:29.997053    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:29.997124    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:30.008788    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:30.008860    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:30.021003    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:30.021079    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:30.033441    6499 logs.go:276] 0 containers: []
	W0718 21:17:30.033453    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:30.033518    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:30.045832    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:30.045852    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:30.045857    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:30.060371    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:30.060384    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:30.079847    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:30.079861    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:30.091453    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:30.091469    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:30.103058    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:30.103068    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:30.137496    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:30.137508    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:30.149392    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:30.149405    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:30.164407    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:30.164418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:30.177046    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:30.177059    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:30.188100    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:30.188109    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:30.211551    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:30.211559    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:30.215933    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:30.215940    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:30.229249    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:30.229260    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:30.240653    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:30.240665    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:30.257609    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:30.257620    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:30.292942    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:30.292948    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:30.306902    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:30.306915    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:32.822549    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:37.824839    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:37.825189    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:37.856104    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:37.856208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:37.873077    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:37.873162    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:37.886797    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:37.886862    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:37.898235    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:37.898297    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:37.908192    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:37.908276    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:37.918784    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:37.918846    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:37.929726    6499 logs.go:276] 0 containers: []
	W0718 21:17:37.929740    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:37.929801    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:37.940620    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:37.940638    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:37.940642    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:37.976507    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:37.976518    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:37.988154    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:37.988166    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:37.999775    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:37.999802    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:38.010449    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:38.010460    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:38.024940    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:38.024950    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:38.036416    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:38.036428    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:38.047757    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:38.047769    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:38.059374    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:38.059384    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:38.077361    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:38.077372    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:38.101081    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:38.101091    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:38.138020    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:38.138030    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:38.152226    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:38.152239    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:38.165701    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:38.165711    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:38.176681    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:38.176696    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:38.191216    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:38.191225    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:38.203095    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:38.203106    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:40.719445    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:45.721750    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:45.721881    6499 kubeadm.go:597] duration metric: took 4m3.895455209s to restartPrimaryControlPlane
	W0718 21:17:45.721984    6499 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0718 21:17:45.722027    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0718 21:17:46.709525    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:17:46.714449    6499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:17:46.717490    6499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:17:46.720076    6499 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:17:46.720082    6499 kubeadm.go:157] found existing configuration files:
	
	I0718 21:17:46.720103    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/admin.conf
	I0718 21:17:46.722707    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:17:46.722728    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:17:46.725726    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/kubelet.conf
	I0718 21:17:46.728250    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:17:46.728270    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:17:46.731033    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/controller-manager.conf
	I0718 21:17:46.734142    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:17:46.734164    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:17:46.736751    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/scheduler.conf
	I0718 21:17:46.739192    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:17:46.739211    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:17:46.742165    6499 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 21:17:46.758719    6499 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0718 21:17:46.758755    6499 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 21:17:46.805704    6499 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 21:17:46.805806    6499 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 21:17:46.805957    6499 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 21:17:46.856216    6499 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 21:17:46.860395    6499 out.go:204]   - Generating certificates and keys ...
	I0718 21:17:46.860512    6499 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 21:17:46.860604    6499 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 21:17:46.860646    6499 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0718 21:17:46.860707    6499 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0718 21:17:46.860775    6499 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0718 21:17:46.860800    6499 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0718 21:17:46.860831    6499 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0718 21:17:46.860924    6499 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0718 21:17:46.860970    6499 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0718 21:17:46.861008    6499 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0718 21:17:46.861030    6499 kubeadm.go:310] [certs] Using the existing "sa" key
	I0718 21:17:46.861146    6499 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 21:17:47.010302    6499 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 21:17:47.083311    6499 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 21:17:47.249513    6499 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 21:17:47.404311    6499 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 21:17:47.437240    6499 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 21:17:47.437623    6499 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 21:17:47.437677    6499 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 21:17:47.524123    6499 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 21:17:47.526982    6499 out.go:204]   - Booting up control plane ...
	I0718 21:17:47.527027    6499 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 21:17:47.529015    6499 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 21:17:47.529375    6499 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 21:17:47.529632    6499 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 21:17:47.530447    6499 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0718 21:17:52.032381    6499 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501895 seconds
	I0718 21:17:52.032470    6499 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 21:17:52.042268    6499 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 21:17:52.563288    6499 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 21:17:52.563613    6499 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-511000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 21:17:53.069869    6499 kubeadm.go:310] [bootstrap-token] Using token: jyjzy8.eevyqmaux8ek27ts
	I0718 21:17:53.075688    6499 out.go:204]   - Configuring RBAC rules ...
	I0718 21:17:53.075785    6499 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 21:17:53.075856    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 21:17:53.082746    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 21:17:53.083986    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 21:17:53.085372    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 21:17:53.086679    6499 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 21:17:53.091172    6499 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 21:17:53.262083    6499 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 21:17:53.475520    6499 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 21:17:53.475972    6499 kubeadm.go:310] 
	I0718 21:17:53.476009    6499 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 21:17:53.476017    6499 kubeadm.go:310] 
	I0718 21:17:53.476079    6499 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 21:17:53.476083    6499 kubeadm.go:310] 
	I0718 21:17:53.476098    6499 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 21:17:53.476132    6499 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 21:17:53.476160    6499 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 21:17:53.476164    6499 kubeadm.go:310] 
	I0718 21:17:53.476201    6499 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 21:17:53.476205    6499 kubeadm.go:310] 
	I0718 21:17:53.476228    6499 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 21:17:53.476231    6499 kubeadm.go:310] 
	I0718 21:17:53.476262    6499 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 21:17:53.476303    6499 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 21:17:53.476339    6499 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 21:17:53.476344    6499 kubeadm.go:310] 
	I0718 21:17:53.476397    6499 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 21:17:53.476448    6499 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 21:17:53.476451    6499 kubeadm.go:310] 
	I0718 21:17:53.476519    6499 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jyjzy8.eevyqmaux8ek27ts \
	I0718 21:17:53.476577    6499 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 21:17:53.476588    6499 kubeadm.go:310] 	--control-plane 
	I0718 21:17:53.476591    6499 kubeadm.go:310] 
	I0718 21:17:53.476630    6499 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 21:17:53.476633    6499 kubeadm.go:310] 
	I0718 21:17:53.476679    6499 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jyjzy8.eevyqmaux8ek27ts \
	I0718 21:17:53.476745    6499 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 21:17:53.476800    6499 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 21:17:53.476808    6499 cni.go:84] Creating CNI manager for ""
	I0718 21:17:53.476815    6499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:17:53.479862    6499 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0718 21:17:53.486805    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0718 21:17:53.490268    6499 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0718 21:17:53.495108    6499 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 21:17:53.495155    6499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 21:17:53.495186    6499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-511000 minikube.k8s.io/updated_at=2024_07_18T21_17_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=running-upgrade-511000 minikube.k8s.io/primary=true
	I0718 21:17:53.536798    6499 kubeadm.go:1113] duration metric: took 41.684625ms to wait for elevateKubeSystemPrivileges
	I0718 21:17:53.536807    6499 ops.go:34] apiserver oom_adj: -16
	I0718 21:17:53.536815    6499 kubeadm.go:394] duration metric: took 4m11.723757042s to StartCluster
	I0718 21:17:53.536825    6499 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:17:53.536907    6499 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:17:53.537288    6499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:17:53.537493    6499 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:17:53.537498    6499 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 21:17:53.537537    6499 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-511000"
	I0718 21:17:53.537553    6499 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-511000"
	W0718 21:17:53.537560    6499 addons.go:243] addon storage-provisioner should already be in state true
	I0718 21:17:53.537586    6499 host.go:66] Checking if "running-upgrade-511000" exists ...
	I0718 21:17:53.537604    6499 config.go:182] Loaded profile config "running-upgrade-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:17:53.537592    6499 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-511000"
	I0718 21:17:53.537639    6499 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-511000"
	I0718 21:17:53.537864    6499 retry.go:31] will retry after 1.070439171s: connect: dial unix /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/monitor: connect: connection refused
	I0718 21:17:53.538526    6499 kapi.go:59] client config for running-upgrade-511000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040ff790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:17:53.538659    6499 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-511000"
	W0718 21:17:53.538663    6499 addons.go:243] addon default-storageclass should already be in state true
	I0718 21:17:53.538671    6499 host.go:66] Checking if "running-upgrade-511000" exists ...
	I0718 21:17:53.539186    6499 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 21:17:53.539191    6499 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 21:17:53.539196    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:17:53.541617    6499 out.go:177] * Verifying Kubernetes components...
	I0718 21:17:53.549802    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:17:53.642546    6499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:17:53.647647    6499 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:17:53.647703    6499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:17:53.650805    6499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 21:17:53.655514    6499 api_server.go:72] duration metric: took 118.013084ms to wait for apiserver process to appear ...
	I0718 21:17:53.655523    6499 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:17:53.655531    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:54.614494    6499 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:17:54.618473    6499 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:17:54.618483    6499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 21:17:54.618497    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:17:54.650739    6499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:17:58.657545    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:58.657583    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:03.657796    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:03.657838    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:08.658098    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:08.658119    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:13.658394    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:13.658420    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:18.659252    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:18.659312    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:23.660429    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:23.660474    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0718 21:18:23.964223    6499 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0718 21:18:23.969606    6499 out.go:177] * Enabled addons: storage-provisioner
	I0718 21:18:23.977407    6499 addons.go:510] duration metric: took 30.440787s for enable addons: enabled=[storage-provisioner]
	I0718 21:18:28.661674    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:28.661713    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:33.663193    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:33.663233    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:38.665427    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:38.665461    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:43.667588    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:43.667612    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:48.669696    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:48.669751    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:53.671828    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:53.671916    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:53.682300    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:18:53.682372    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:53.693485    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:18:53.693556    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:53.704118    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:18:53.704188    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:53.715355    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:18:53.715427    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:53.731475    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:18:53.731562    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:53.742147    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:18:53.742214    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:53.752462    6499 logs.go:276] 0 containers: []
	W0718 21:18:53.752475    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:53.752539    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:53.762604    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:18:53.762616    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:53.762622    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:53.797786    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:53.797798    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:53.802755    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:18:53.802762    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:18:53.816919    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:18:53.816929    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:18:53.829068    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:18:53.829087    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:18:53.840666    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:18:53.840677    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:18:53.852392    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:18:53.852402    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:18:53.869519    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:53.869530    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:53.908030    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:18:53.908044    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:18:53.922575    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:18:53.922585    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:18:53.936908    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:18:53.936918    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:18:53.948259    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:53.948272    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:53.971357    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:18:53.971364    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:56.484261    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:01.486474    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:01.486703    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:01.506839    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:01.506924    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:01.521821    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:01.521898    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:01.533841    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:01.533909    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:01.545304    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:01.545385    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:01.556195    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:01.556263    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:01.567321    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:01.567384    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:01.583295    6499 logs.go:276] 0 containers: []
	W0718 21:19:01.583311    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:01.583372    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:01.594636    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:01.594653    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:01.594659    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:01.629617    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:01.629626    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:01.645494    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:01.645505    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:01.657468    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:01.657479    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:01.672739    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:01.672750    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:01.687865    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:01.687875    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:01.706660    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:01.706676    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:01.724432    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:01.724442    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:01.729145    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:01.729151    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:01.763808    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:01.763818    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:01.782489    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:01.782498    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:01.796439    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:01.796451    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:01.820984    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:01.820994    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:04.334670    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:09.337002    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:09.337323    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:09.372047    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:09.372176    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:09.391269    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:09.391378    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:09.406051    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:09.406125    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:09.418353    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:09.418428    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:09.429269    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:09.429344    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:09.441206    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:09.441276    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:09.453948    6499 logs.go:276] 0 containers: []
	W0718 21:19:09.453960    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:09.454021    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:09.464469    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:09.464484    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:09.464489    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:09.476388    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:09.476399    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:09.481030    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:09.481037    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:09.524847    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:09.524857    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:09.539367    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:09.539378    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:09.551161    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:09.551174    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:09.565993    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:09.566003    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:09.578252    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:09.578264    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:09.613544    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:09.613551    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:09.627312    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:09.627324    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:09.641196    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:09.641214    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:09.653636    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:09.653647    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:09.671943    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:09.671953    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:12.196434    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:17.198699    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:17.198921    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:17.223860    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:17.223980    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:17.240630    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:17.240719    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:17.254385    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:17.254454    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:17.265280    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:17.265344    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:17.276500    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:17.276575    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:17.286941    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:17.287012    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:17.297538    6499 logs.go:276] 0 containers: []
	W0718 21:19:17.297554    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:17.297610    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:17.308057    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:17.308073    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:17.308083    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:17.322424    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:17.322436    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:17.333938    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:17.333950    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:17.352010    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:17.352023    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:17.363085    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:17.363095    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:17.386515    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:17.386522    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:17.399553    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:17.399566    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:17.434414    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:17.434421    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:17.468789    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:17.468803    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:17.487468    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:17.487478    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:17.499203    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:17.499216    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:17.514292    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:17.514301    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:17.525758    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:17.525769    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:20.032156    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:25.034411    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:25.034545    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:25.048298    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:25.048377    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:25.060536    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:25.060607    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:25.072220    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:25.072290    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:25.083141    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:25.083208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:25.093685    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:25.093754    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:25.104141    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:25.104213    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:25.113885    6499 logs.go:276] 0 containers: []
	W0718 21:19:25.113895    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:25.113950    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:25.124093    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:25.124107    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:25.124113    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:25.135421    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:25.135431    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:25.150639    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:25.150650    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:25.169134    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:25.169146    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:25.181479    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:25.181490    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:25.186315    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:25.186321    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:25.255378    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:25.255389    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:25.269532    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:25.269542    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:25.283323    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:25.283335    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:25.297899    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:25.297911    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:25.309589    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:25.309600    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:25.334112    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:25.334124    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:25.367948    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:25.367959    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:27.882054    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:32.884186    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:32.884300    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:32.895896    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:32.895971    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:32.906265    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:32.906332    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:32.917047    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:32.917117    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:32.927270    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:32.927352    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:32.937743    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:32.937811    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:32.948284    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:32.948351    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:32.958329    6499 logs.go:276] 0 containers: []
	W0718 21:19:32.958341    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:32.958403    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:32.972104    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:32.972120    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:32.972129    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:33.009972    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:33.009983    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:33.027963    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:33.027973    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:33.039340    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:33.039350    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:33.051223    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:33.051233    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:33.068621    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:33.068631    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:33.079941    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:33.079951    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:33.114656    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:33.114664    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:33.118809    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:33.118817    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:33.137372    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:33.137381    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:33.149202    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:33.149214    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:33.160559    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:33.160572    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:33.174829    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:33.174842    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:35.699870    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:40.702280    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:40.702431    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:40.713929    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:40.713999    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:40.724774    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:40.724850    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:40.739386    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:40.739457    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:40.757320    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:40.757394    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:40.768310    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:40.768384    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:40.778994    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:40.779062    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:40.789159    6499 logs.go:276] 0 containers: []
	W0718 21:19:40.789172    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:40.789231    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:40.800011    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:40.800026    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:40.800031    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:40.814865    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:40.814876    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:40.826828    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:40.826839    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:40.845187    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:40.845199    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:40.870417    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:40.870424    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:40.903468    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:40.903478    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:40.918389    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:40.918400    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:40.932151    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:40.932162    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:40.947820    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:40.947831    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:40.959522    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:40.959532    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:40.971369    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:40.971378    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:40.983134    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:40.983143    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:40.987557    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:40.987564    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:43.524315    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:48.526551    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:48.526641    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:48.545525    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:48.545601    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:48.563690    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:48.563760    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:48.573969    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:48.574041    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:48.584588    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:48.584659    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:48.595589    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:48.595660    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:48.605879    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:48.605943    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:48.616037    6499 logs.go:276] 0 containers: []
	W0718 21:19:48.616047    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:48.616105    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:48.631060    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:48.631080    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:48.631087    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:48.649808    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:48.649819    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:48.664012    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:48.664021    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:48.675900    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:48.675911    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:48.687140    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:48.687150    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:48.711662    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:48.711669    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:48.723364    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:48.723377    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:48.758291    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:48.758299    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:48.762384    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:48.762390    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:48.798739    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:48.798750    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:48.813801    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:48.813810    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:48.826522    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:48.826533    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:48.844180    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:48.844193    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:51.360076    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:56.360287    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:56.360383    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:56.371782    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:56.371855    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:56.387949    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:56.388033    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:56.399753    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:56.399828    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:56.411448    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:56.411522    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:56.422168    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:56.422237    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:56.432389    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:56.432462    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:56.443276    6499 logs.go:276] 0 containers: []
	W0718 21:19:56.443288    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:56.443351    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:56.455420    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:56.455436    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:56.455441    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:56.460512    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:56.460523    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:56.476279    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:56.476289    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:56.501370    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:56.501380    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:56.512370    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:56.512381    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:56.524318    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:56.524328    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:56.538770    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:56.538783    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:56.550375    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:56.550387    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:56.562544    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:56.562554    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:56.595471    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:56.595479    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:56.630812    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:56.630827    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:56.645657    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:56.645667    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:56.660473    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:56.660486    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:59.185625    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:04.187758    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:04.187847    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:04.199051    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:04.199114    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:04.209838    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:04.209910    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:04.220882    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:04.220963    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:04.232872    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:04.232942    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:04.244292    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:04.244364    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:04.260452    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:04.260527    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:04.272088    6499 logs.go:276] 0 containers: []
	W0718 21:20:04.272101    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:04.272164    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:04.283307    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:04.283322    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:04.283328    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:04.297282    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:04.297296    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:04.310656    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:04.310670    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:04.323063    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:04.323078    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:04.338862    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:04.338876    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:04.351378    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:04.351388    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:04.364162    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:04.364174    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:04.388951    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:04.388964    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:04.424765    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:04.424778    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:04.429624    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:04.429632    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:04.467510    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:04.467523    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:04.481549    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:04.481560    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:04.495580    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:04.495591    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:07.014521    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:12.014682    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:12.014758    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:12.025924    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:12.025999    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:12.038273    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:12.038341    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:12.049836    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:12.049907    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:12.063356    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:12.063433    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:12.075131    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:12.075202    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:12.086489    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:12.086555    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:12.100161    6499 logs.go:276] 0 containers: []
	W0718 21:20:12.100174    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:12.100239    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:12.111672    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:12.111690    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:12.111696    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:12.124088    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:12.124100    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:12.139694    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:12.139708    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:12.167244    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:12.167269    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:12.181293    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:12.181307    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:12.186125    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:12.186137    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:12.226609    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:12.226620    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:12.239047    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:12.239059    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:12.257467    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:12.257479    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:12.270127    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:12.270140    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:12.284504    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:12.284512    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:12.297074    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:12.297082    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:12.309644    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:12.309656    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:12.322156    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:12.322166    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:12.355813    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:12.355822    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:14.874950    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:19.877015    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:19.877089    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:19.888636    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:19.888706    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:19.900901    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:19.900974    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:19.913235    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:19.913310    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:19.924801    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:19.924873    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:19.935948    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:19.936017    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:19.951950    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:19.952023    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:19.963747    6499 logs.go:276] 0 containers: []
	W0718 21:20:19.963758    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:19.963820    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:19.975414    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:19.975432    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:19.975437    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:19.988610    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:19.988619    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:20.004814    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:20.004827    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:20.023810    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:20.023826    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:20.061084    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:20.061098    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:20.104995    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:20.105008    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:20.128497    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:20.128509    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:20.158030    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:20.158041    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:20.172507    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:20.172518    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:20.197890    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:20.197908    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:20.215482    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:20.215493    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:20.231466    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:20.231478    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:20.244789    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:20.244805    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:20.257590    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:20.257603    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:20.262149    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:20.262158    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:22.782538    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:27.784655    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:27.784914    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:27.811401    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:27.811500    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:27.829242    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:27.829319    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:27.843292    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:27.843364    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:27.856070    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:27.856101    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:27.869291    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:27.869339    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:27.880502    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:27.880548    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:27.895482    6499 logs.go:276] 0 containers: []
	W0718 21:20:27.895494    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:27.895552    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:27.907313    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:27.907331    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:27.907337    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:27.943363    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:27.943379    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:27.948705    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:27.948713    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:27.961127    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:27.961137    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:27.987582    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:27.987599    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:28.003221    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:28.003230    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:28.018326    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:28.018338    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:28.031223    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:28.031234    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:28.043529    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:28.043540    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:28.084352    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:28.084363    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:28.096692    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:28.096703    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:28.115068    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:28.115083    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:28.133634    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:28.133650    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:28.146023    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:28.146035    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:28.160118    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:28.160128    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:30.677649    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:35.679059    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:35.679499    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:35.717580    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:35.717755    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:35.738929    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:35.739027    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:35.755980    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:35.756060    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:35.769979    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:35.770052    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:35.781787    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:35.781861    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:35.793102    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:35.793178    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:35.804546    6499 logs.go:276] 0 containers: []
	W0718 21:20:35.804557    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:35.804617    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:35.816201    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:35.816220    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:35.816226    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:35.834851    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:35.834860    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:35.850434    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:35.850442    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:35.855310    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:35.855326    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:35.870599    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:35.870611    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:35.885844    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:35.885859    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:35.898896    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:35.898907    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:35.914331    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:35.914339    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:35.928170    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:35.928182    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:35.940817    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:35.940829    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:35.953516    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:35.953526    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:35.978015    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:35.978034    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:36.016308    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:36.016320    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:36.055447    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:36.055456    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:36.069035    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:36.069048    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:38.584636    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:43.586902    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:43.587038    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:43.602443    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:43.602520    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:43.613161    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:43.613238    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:43.623434    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:43.623507    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:43.638458    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:43.638523    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:43.648838    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:43.648909    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:43.659117    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:43.659182    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:43.670625    6499 logs.go:276] 0 containers: []
	W0718 21:20:43.670636    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:43.670694    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:43.682063    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:43.682082    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:43.682088    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:43.694799    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:43.694811    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:43.713359    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:43.713368    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:43.755757    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:43.755782    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:43.769533    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:43.769543    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:43.795891    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:43.795904    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:43.808986    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:43.808997    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:43.844215    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:43.844225    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:43.860068    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:43.860086    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:43.875198    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:43.875212    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:43.880186    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:43.880195    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:43.893423    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:43.893436    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:43.906209    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:43.906221    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:43.919070    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:43.919084    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:43.934684    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:43.934695    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:46.449873    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:51.451938    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:51.452081    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:51.465073    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:51.465153    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:51.476142    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:51.476220    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:51.487170    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:51.487241    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:51.497633    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:51.497708    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:51.508142    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:51.508217    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:51.518612    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:51.518685    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:51.528476    6499 logs.go:276] 0 containers: []
	W0718 21:20:51.528486    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:51.528542    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:51.539565    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:51.539581    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:51.539586    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:51.551124    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:51.551136    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:51.566055    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:51.566066    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:51.599565    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:51.599579    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:51.614433    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:51.614445    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:51.627587    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:51.627601    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:51.647069    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:51.647080    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:51.674119    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:51.674132    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:51.713124    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:51.713136    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:51.726241    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:51.726254    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:51.742988    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:51.742999    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:51.758924    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:51.758936    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:51.771778    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:51.771791    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:51.776740    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:51.776749    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:51.790167    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:51.790179    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:54.306820    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:59.309462    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:59.309822    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:59.347475    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:59.347610    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:59.364222    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:59.364311    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:59.377542    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:59.377611    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:59.389079    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:59.389152    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:59.400523    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:59.400590    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:59.411348    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:59.411414    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:59.422141    6499 logs.go:276] 0 containers: []
	W0718 21:20:59.422160    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:59.422224    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:59.432906    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:59.432925    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:59.432930    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:59.453667    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:59.453678    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:59.472006    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:59.472017    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:59.507232    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:59.507242    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:59.522418    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:59.522429    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:59.534947    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:59.534959    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:59.551628    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:59.551640    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:59.564362    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:59.564373    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:59.577295    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:59.577311    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:59.616442    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:59.616456    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:59.635820    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:59.635830    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:59.648033    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:59.648044    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:59.660217    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:59.660229    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:59.684090    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:59.684106    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:59.688966    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:59.688976    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:02.204019    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:07.206118    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:07.206197    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:07.217975    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:07.218040    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:07.229632    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:07.229704    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:07.245758    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:07.245833    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:07.256484    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:07.256549    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:07.268183    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:07.268248    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:07.279603    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:07.279671    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:07.291100    6499 logs.go:276] 0 containers: []
	W0718 21:21:07.291113    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:07.291173    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:07.302035    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:07.302055    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:07.302060    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:07.315520    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:07.315533    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:07.327998    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:07.328011    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:07.355051    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:07.355070    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:07.392795    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:07.392812    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:07.409325    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:07.409337    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:07.422345    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:07.422358    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:07.435233    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:07.435244    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:07.452675    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:07.452687    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:07.477647    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:07.477657    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:07.496441    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:07.496453    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:07.509535    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:07.509548    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:07.514749    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:07.514760    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:07.554088    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:07.554107    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:07.567574    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:07.567586    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:10.085284    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:15.087355    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:15.087569    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:15.103105    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:15.103197    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:15.115412    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:15.115486    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:15.125845    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:15.125920    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:15.136137    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:15.136208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:15.146724    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:15.146794    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:15.157857    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:15.157923    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:15.168136    6499 logs.go:276] 0 containers: []
	W0718 21:21:15.168146    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:15.168206    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:15.178560    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:15.178577    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:15.178582    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:15.213580    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:15.213589    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:15.250687    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:15.250697    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:15.262550    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:15.262561    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:15.275223    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:15.275235    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:15.280163    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:15.280170    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:15.294664    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:15.294678    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:15.306719    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:15.306729    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:15.330529    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:15.330536    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:15.348557    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:15.348567    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:15.360921    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:15.360932    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:15.372951    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:15.372962    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:15.388651    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:15.388661    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:15.405274    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:15.405284    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:15.419966    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:15.419977    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:17.940712    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:22.942814    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:22.943052    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:22.969569    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:22.969672    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:22.985883    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:22.985959    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:23.000189    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:23.000263    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:23.011508    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:23.011569    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:23.022321    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:23.022387    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:23.034008    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:23.034079    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:23.044103    6499 logs.go:276] 0 containers: []
	W0718 21:21:23.044115    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:23.044170    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:23.055596    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:23.055612    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:23.055617    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:23.073399    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:23.073409    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:23.107917    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:23.107927    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:23.122160    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:23.122172    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:23.136014    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:23.136024    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:23.148233    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:23.148249    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:23.161046    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:23.161059    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:23.173336    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:23.173353    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:23.206792    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:23.206800    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:23.211511    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:23.211520    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:23.226396    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:23.226406    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:23.238427    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:23.238438    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:23.250898    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:23.250908    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:23.268624    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:23.268634    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:23.284842    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:23.284851    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:25.810318    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:30.812444    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:30.812609    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:30.824679    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:30.824750    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:30.835379    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:30.835451    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:30.846157    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:30.846225    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:30.861817    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:30.861885    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:30.872261    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:30.872333    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:30.882619    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:30.882688    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:30.892796    6499 logs.go:276] 0 containers: []
	W0718 21:21:30.892808    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:30.892867    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:30.903712    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:30.903730    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:30.903736    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:30.921123    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:30.921132    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:30.933050    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:30.933064    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:30.937710    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:30.937717    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:30.952448    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:30.952460    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:30.967220    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:30.967230    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:30.979140    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:30.979151    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:30.991012    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:30.991025    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:31.002666    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:31.002675    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:31.014649    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:31.014662    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:31.039981    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:31.039995    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:31.076613    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:31.076631    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:31.113236    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:31.113247    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:31.127417    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:31.127429    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:31.139201    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:31.139213    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:33.650964    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:38.651292    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:38.651520    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:38.669017    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:38.669102    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:38.682068    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:38.682141    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:38.693693    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:38.693756    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:38.703932    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:38.703998    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:38.714604    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:38.714680    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:38.725048    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:38.725115    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:38.736491    6499 logs.go:276] 0 containers: []
	W0718 21:21:38.736503    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:38.736559    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:38.746660    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:38.746675    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:38.746680    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:38.761346    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:38.761360    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:38.773378    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:38.773391    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:38.785256    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:38.785268    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:38.789708    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:38.789714    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:38.804988    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:38.805002    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:38.816372    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:38.816383    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:38.849082    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:38.849093    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:38.860888    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:38.860898    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:38.884445    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:38.884458    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:38.895694    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:38.895706    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:38.930700    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:38.930715    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:38.947075    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:38.947085    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:38.958944    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:38.958955    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:38.971289    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:38.971304    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:41.495510    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:46.497665    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:46.497940    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:46.522380    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:46.522510    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:46.541012    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:46.541101    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:46.556252    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:46.556320    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:46.566625    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:46.566691    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:46.581608    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:46.581672    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:46.592110    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:46.592183    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:46.602537    6499 logs.go:276] 0 containers: []
	W0718 21:21:46.602549    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:46.602601    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:46.613123    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:46.613141    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:46.613146    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:46.627457    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:46.627470    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:46.639921    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:46.639934    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:46.674439    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:46.674451    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:46.686099    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:46.686113    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:46.698168    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:46.698180    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:46.710126    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:46.710136    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:46.735140    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:46.735151    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:46.769964    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:46.769974    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:46.784026    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:46.784037    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:46.795501    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:46.795511    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:46.808568    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:46.808578    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:46.822699    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:46.822710    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:46.834812    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:46.834824    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:46.839630    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:46.839636    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:49.359381    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:54.361493    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:54.366042    6499 out.go:177] 
	W0718 21:21:54.369994    6499 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0718 21:21:54.370003    6499 out.go:239] * 
	* 
	W0718 21:21:54.370747    6499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:21:54.381969    6499 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-511000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-18 21:21:54.482159 -0700 PDT m=+3424.587875126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-511000 -n running-upgrade-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-511000 -n running-upgrade-511000: exit status 2 (15.583764916s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-511000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-439000          | force-systemd-flag-439000 | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-598000              | force-systemd-env-598000  | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-598000           | force-systemd-env-598000  | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT | 18 Jul 24 21:11 PDT |
	| start   | -p docker-flags-199000                | docker-flags-199000       | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-439000             | force-systemd-flag-439000 | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-439000          | force-systemd-flag-439000 | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT | 18 Jul 24 21:11 PDT |
	| start   | -p cert-expiration-240000             | cert-expiration-240000    | jenkins | v1.33.1 | 18 Jul 24 21:11 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-199000 ssh               | docker-flags-199000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-199000 ssh               | docker-flags-199000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-199000                | docker-flags-199000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT | 18 Jul 24 21:12 PDT |
	| start   | -p cert-options-935000                | cert-options-935000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-935000 ssh               | cert-options-935000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-935000 -- sudo        | cert-options-935000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-935000                | cert-options-935000       | jenkins | v1.33.1 | 18 Jul 24 21:12 PDT | 18 Jul 24 21:12 PDT |
	| start   | -p running-upgrade-511000             | minikube                  | jenkins | v1.26.0 | 18 Jul 24 21:12 PDT | 18 Jul 24 21:13 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-511000             | running-upgrade-511000    | jenkins | v1.33.1 | 18 Jul 24 21:13 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-240000             | cert-expiration-240000    | jenkins | v1.33.1 | 18 Jul 24 21:15 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-240000             | cert-expiration-240000    | jenkins | v1.33.1 | 18 Jul 24 21:15 PDT | 18 Jul 24 21:15 PDT |
	| start   | -p kubernetes-upgrade-797000          | kubernetes-upgrade-797000 | jenkins | v1.33.1 | 18 Jul 24 21:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-797000          | kubernetes-upgrade-797000 | jenkins | v1.33.1 | 18 Jul 24 21:15 PDT | 18 Jul 24 21:15 PDT |
	| start   | -p kubernetes-upgrade-797000          | kubernetes-upgrade-797000 | jenkins | v1.33.1 | 18 Jul 24 21:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-797000          | kubernetes-upgrade-797000 | jenkins | v1.33.1 | 18 Jul 24 21:15 PDT | 18 Jul 24 21:15 PDT |
	| start   | -p stopped-upgrade-465000             | minikube                  | jenkins | v1.26.0 | 18 Jul 24 21:15 PDT | 18 Jul 24 21:16 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-465000 stop           | minikube                  | jenkins | v1.26.0 | 18 Jul 24 21:16 PDT | 18 Jul 24 21:16 PDT |
	| start   | -p stopped-upgrade-465000             | stopped-upgrade-465000    | jenkins | v1.33.1 | 18 Jul 24 21:16 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 21:16:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 21:16:26.321568    6638 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:16:26.321744    6638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:16:26.321748    6638 out.go:304] Setting ErrFile to fd 2...
	I0718 21:16:26.321751    6638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:16:26.321911    6638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:16:26.323153    6638 out.go:298] Setting JSON to false
	I0718 21:16:26.343393    6638 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4554,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:16:26.343465    6638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:16:26.348386    6638 out.go:177] * [stopped-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:16:26.356294    6638 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:16:26.356399    6638 notify.go:220] Checking for updates...
	I0718 21:16:26.362200    6638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:16:26.365333    6638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:16:26.368374    6638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:16:26.369602    6638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:16:26.372396    6638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:16:26.375576    6638 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:16:26.379351    6638 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0718 21:16:26.382380    6638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:16:26.386317    6638 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:16:26.393312    6638 start.go:297] selected driver: qemu2
	I0718 21:16:26.393318    6638 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:16:26.393366    6638 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:16:26.395853    6638 cni.go:84] Creating CNI manager for ""
	I0718 21:16:26.395868    6638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:16:26.395889    6638 start.go:340] cluster config:
	{Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:16:26.395945    6638 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:16:26.402245    6638 out.go:177] * Starting "stopped-upgrade-465000" primary control-plane node in "stopped-upgrade-465000" cluster
	I0718 21:16:26.406357    6638 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0718 21:16:26.406374    6638 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0718 21:16:26.406385    6638 cache.go:56] Caching tarball of preloaded images
	I0718 21:16:26.406452    6638 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:16:26.406457    6638 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0718 21:16:26.406504    6638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/config.json ...
	I0718 21:16:26.406900    6638 start.go:360] acquireMachinesLock for stopped-upgrade-465000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:16:26.406925    6638 start.go:364] duration metric: took 19.916µs to acquireMachinesLock for "stopped-upgrade-465000"
	I0718 21:16:26.406932    6638 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:16:26.406938    6638 fix.go:54] fixHost starting: 
	I0718 21:16:26.407042    6638 fix.go:112] recreateIfNeeded on stopped-upgrade-465000: state=Stopped err=<nil>
	W0718 21:16:26.407054    6638 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:16:26.412326    6638 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-465000" ...
	I0718 21:16:26.792465    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:26.792613    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:26.806550    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:26.806610    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:26.818091    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:26.818159    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:26.828347    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:26.828399    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:26.838427    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:26.838490    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:26.848504    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:26.848567    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:26.859199    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:26.859265    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:26.869016    6499 logs.go:276] 0 containers: []
	W0718 21:16:26.869033    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:26.869085    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:26.885440    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:26.885460    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:26.885467    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:26.899229    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:26.899239    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:26.917127    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:26.917137    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:26.940079    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:26.940088    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:26.976234    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:26.976247    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:26.989836    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:26.989848    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:27.001625    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:27.001638    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:27.016715    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:27.016729    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:27.028076    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:27.028087    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:27.032585    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:27.032594    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:27.047247    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:27.047257    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:27.058781    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:27.058790    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:27.096321    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:27.096329    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:27.110405    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:27.110418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:27.124839    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:27.124850    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:27.135905    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:27.135915    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:27.147307    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:27.147317    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:29.661422    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:26.416360    6638 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:16:26.416422    6638 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50500-:22,hostfwd=tcp::50501-:2376,hostname=stopped-upgrade-465000 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/disk.qcow2
	I0718 21:16:26.459992    6638 main.go:141] libmachine: STDOUT: 
	I0718 21:16:26.460019    6638 main.go:141] libmachine: STDERR: 
	I0718 21:16:26.460025    6638 main.go:141] libmachine: Waiting for VM to start (ssh -p 50500 docker@127.0.0.1)...
	I0718 21:16:34.663925    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:34.664068    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:34.680817    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:34.680892    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:34.691857    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:34.691933    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:34.702623    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:34.702700    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:34.713570    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:34.713642    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:34.724157    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:34.724219    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:34.735187    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:34.735256    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:34.745496    6499 logs.go:276] 0 containers: []
	W0718 21:16:34.745507    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:34.745563    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:34.756332    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:34.756354    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:34.756361    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:34.768444    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:34.768455    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:34.802852    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:34.802862    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:34.815463    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:34.815474    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:34.827645    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:34.827656    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:34.839541    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:34.839551    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:34.857132    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:34.857142    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:34.895503    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:34.895514    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:34.911775    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:34.911785    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:34.926796    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:34.926806    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:34.942366    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:34.942375    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:34.947377    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:34.947385    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:34.962797    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:34.962807    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:34.974568    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:34.974580    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:34.989494    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:34.989503    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:35.004075    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:35.004089    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:35.015675    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:35.015687    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:37.542080    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:42.544251    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:42.544675    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:42.585744    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:42.585834    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:42.603732    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:42.603813    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:42.621834    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:42.621894    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:42.637497    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:42.637567    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:42.657592    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:42.657663    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:42.668357    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:42.668421    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:42.678603    6499 logs.go:276] 0 containers: []
	W0718 21:16:42.678613    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:42.678668    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:42.695318    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:42.695336    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:42.695341    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:42.718593    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:42.718600    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:42.754037    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:42.754050    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:42.790918    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:42.790933    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:42.805143    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:42.805158    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:42.816144    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:42.816154    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:42.828097    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:42.828111    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:42.843129    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:42.843141    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:42.860591    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:42.860604    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:42.873510    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:42.873522    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:42.889897    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:42.889913    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:42.904281    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:42.904297    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:42.915967    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:42.915978    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:42.927171    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:42.927182    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:42.941323    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:42.941336    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:42.945508    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:42.945514    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:42.959383    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:42.959395    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:45.475760    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:46.481802    6638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/config.json ...
	I0718 21:16:46.482611    6638 machine.go:94] provisionDockerMachine start ...
	I0718 21:16:46.482820    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.483403    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.483421    6638 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:16:46.570633    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 21:16:46.570663    6638 buildroot.go:166] provisioning hostname "stopped-upgrade-465000"
	I0718 21:16:46.570795    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.571053    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.571064    6638 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-465000 && echo "stopped-upgrade-465000" | sudo tee /etc/hostname
	I0718 21:16:46.653889    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-465000
	
	I0718 21:16:46.653968    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.654170    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.654186    6638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-465000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-465000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-465000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:16:46.724742    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:16:46.724758    6638 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 21:16:46.724768    6638 buildroot.go:174] setting up certificates
	I0718 21:16:46.724773    6638 provision.go:84] configureAuth start
	I0718 21:16:46.724779    6638 provision.go:143] copyHostCerts
	I0718 21:16:46.724866    6638 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 21:16:46.724878    6638 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 21:16:46.724995    6638 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 21:16:46.725205    6638 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 21:16:46.725209    6638 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 21:16:46.725267    6638 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 21:16:46.725379    6638 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 21:16:46.725385    6638 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 21:16:46.725435    6638 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 21:16:46.725534    6638 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-465000 san=[127.0.0.1 localhost minikube stopped-upgrade-465000]
	I0718 21:16:46.863855    6638 provision.go:177] copyRemoteCerts
	I0718 21:16:46.863908    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:16:46.863918    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:16:46.899949    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:16:46.907371    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0718 21:16:46.914145    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 21:16:46.920736    6638 provision.go:87] duration metric: took 195.957166ms to configureAuth
	I0718 21:16:46.920745    6638 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:16:46.920862    6638 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:16:46.920904    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.921000    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.921005    6638 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:16:46.985477    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:16:46.985487    6638 buildroot.go:70] root file system type: tmpfs
	I0718 21:16:46.985536    6638 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:16:46.985583    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.985697    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.985730    6638 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:16:47.052924    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:16:47.052982    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:47.053097    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:47.053106    6638 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:16:47.415093    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:16:47.415105    6638 machine.go:97] duration metric: took 932.510709ms to provisionDockerMachine
	I0718 21:16:47.415112    6638 start.go:293] postStartSetup for "stopped-upgrade-465000" (driver="qemu2")
	I0718 21:16:47.415119    6638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:16:47.415177    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:16:47.415188    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:16:47.451898    6638 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:16:47.453273    6638 info.go:137] Remote host: Buildroot 2021.02.12
	I0718 21:16:47.453281    6638 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 21:16:47.453359    6638 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 21:16:47.453452    6638 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 21:16:47.453559    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:16:47.456345    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 21:16:47.463700    6638 start.go:296] duration metric: took 48.583042ms for postStartSetup
	I0718 21:16:47.463713    6638 fix.go:56] duration metric: took 21.057385959s for fixHost
	I0718 21:16:47.463749    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:47.463855    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:47.463862    6638 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 21:16:47.526485    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362607.314287088
	
	I0718 21:16:47.526493    6638 fix.go:216] guest clock: 1721362607.314287088
	I0718 21:16:47.526498    6638 fix.go:229] Guest: 2024-07-18 21:16:47.314287088 -0700 PDT Remote: 2024-07-18 21:16:47.463715 -0700 PDT m=+21.175778335 (delta=-149.427912ms)
	I0718 21:16:47.526508    6638 fix.go:200] guest clock delta is within tolerance: -149.427912ms
	I0718 21:16:47.526512    6638 start.go:83] releasing machines lock for "stopped-upgrade-465000", held for 21.120194208s
	I0718 21:16:47.526572    6638 ssh_runner.go:195] Run: cat /version.json
	I0718 21:16:47.526583    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:16:47.526572    6638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:16:47.526614    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	W0718 21:16:47.527192    6638 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50500: connect: connection refused
	I0718 21:16:47.527214    6638 retry.go:31] will retry after 325.508ms: dial tcp [::1]:50500: connect: connection refused
	W0718 21:16:47.558128    6638 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0718 21:16:47.558179    6638 ssh_runner.go:195] Run: systemctl --version
	I0718 21:16:47.559910    6638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 21:16:47.561606    6638 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:16:47.561629    6638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0718 21:16:47.564548    6638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0718 21:16:47.569373    6638 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:16:47.569383    6638 start.go:495] detecting cgroup driver to use...
	I0718 21:16:47.569457    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:16:47.576104    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0718 21:16:47.580139    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:16:47.583596    6638 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:16:47.583624    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:16:47.586673    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:16:47.589493    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:16:47.592613    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:16:47.596096    6638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:16:47.599409    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:16:47.602279    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:16:47.605046    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:16:47.608469    6638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:16:47.611578    6638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:16:47.614397    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:47.685016    6638 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:16:47.691741    6638 start.go:495] detecting cgroup driver to use...
	I0718 21:16:47.691806    6638 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:16:47.700578    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:16:47.705266    6638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:16:47.711569    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:16:47.716038    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:16:47.720674    6638 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:16:47.760840    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:16:47.766006    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:16:47.771350    6638 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:16:47.772606    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:16:47.775548    6638 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:16:47.780611    6638 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:16:47.845211    6638 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:16:47.909576    6638 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:16:47.909636    6638 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:16:47.914717    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:47.976509    6638 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:16:49.106384    6638 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.129891916s)
	I0718 21:16:49.106438    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 21:16:49.111243    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:16:49.115593    6638 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 21:16:49.180865    6638 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 21:16:49.245892    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:49.308735    6638 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 21:16:49.314828    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:16:49.319106    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:49.382797    6638 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 21:16:49.422211    6638 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 21:16:49.422293    6638 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 21:16:49.425057    6638 start.go:563] Will wait 60s for crictl version
	I0718 21:16:49.425107    6638 ssh_runner.go:195] Run: which crictl
	I0718 21:16:49.426475    6638 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 21:16:49.440389    6638 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0718 21:16:49.440459    6638 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:16:49.456097    6638 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:16:50.478366    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:50.478547    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:50.501736    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:50.501816    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:50.513538    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:50.513614    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:50.525481    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:50.525562    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:50.537277    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:50.537360    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:50.549372    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:50.549446    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:50.561132    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:50.561208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:50.572479    6499 logs.go:276] 0 containers: []
	W0718 21:16:50.572492    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:50.572557    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:50.586092    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:50.586126    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:50.586132    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:50.625414    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:50.625433    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:50.644546    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:50.644563    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:50.662683    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:50.662696    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:50.675255    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:50.675267    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:50.688083    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:50.688096    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:50.707450    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:50.707467    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:50.729161    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:50.729187    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:50.746621    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:50.746635    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:50.763794    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:50.763807    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:50.777479    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:50.777491    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:50.789933    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:50.789946    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:50.794505    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:50.794518    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:50.838676    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:50.838689    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:50.854665    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:50.854684    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:50.867883    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:50.867895    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:50.894790    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:50.894812    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:49.478584    6638 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0718 21:16:49.478647    6638 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0718 21:16:49.479985    6638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 21:16:49.484037    6638 kubeadm.go:883] updating cluster {Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0718 21:16:49.484088    6638 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0718 21:16:49.484129    6638 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:16:49.494427    6638 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 21:16:49.494436    6638 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0718 21:16:49.494483    6638 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 21:16:49.497466    6638 ssh_runner.go:195] Run: which lz4
	I0718 21:16:49.498810    6638 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0718 21:16:49.500097    6638 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 21:16:49.500107    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0718 21:16:50.381083    6638 docker.go:649] duration metric: took 882.324792ms to copy over tarball
	I0718 21:16:50.381146    6638 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 21:16:53.410823    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:16:51.544903    6638 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.163778s)
	I0718 21:16:51.544918    6638 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 21:16:51.560813    6638 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 21:16:51.564311    6638 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0718 21:16:51.569483    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:51.636377    6638 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:16:53.376435    6638 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.740088916s)
	I0718 21:16:53.376541    6638 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:16:53.398677    6638 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 21:16:53.398686    6638 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0718 21:16:53.398691    6638 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0718 21:16:53.403505    6638 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:53.405605    6638 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.407648    6638 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0718 21:16:53.407678    6638 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:53.416562    6638 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.418175    6638 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.418195    6638 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.418268    6638 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0718 21:16:53.419314    6638 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.419730    6638 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.420815    6638 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.420846    6638 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.421836    6638 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.422800    6638 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.423724    6638 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.423766    6638 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.839486    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0718 21:16:53.848667    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.851004    6638 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0718 21:16:53.851036    6638 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0718 21:16:53.851075    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0718 21:16:53.860648    6638 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0718 21:16:53.860672    6638 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.860733    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.864373    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0718 21:16:53.864484    6638 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0718 21:16:53.871350    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.878485    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0718 21:16:53.878522    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0718 21:16:53.878541    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0718 21:16:53.883394    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.886159    6638 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0718 21:16:53.886170    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0718 21:16:53.896885    6638 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0718 21:16:53.896904    6638 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.896961    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.899048    6638 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0718 21:16:53.899066    6638 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.899106    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0718 21:16:53.915304    6638 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0718 21:16:53.915432    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.933603    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.934656    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.936179    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0718 21:16:53.936213    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0718 21:16:53.936228    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0718 21:16:53.941494    6638 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0718 21:16:53.941515    6638 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.941570    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.964223    6638 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0718 21:16:53.964243    6638 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.964304    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.964593    6638 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0718 21:16:53.964605    6638 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.964628    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.964646    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0718 21:16:53.964734    6638 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0718 21:16:53.974611    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0718 21:16:53.977635    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0718 21:16:53.977652    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0718 21:16:53.977733    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0718 21:16:53.977833    6638 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0718 21:16:53.979620    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0718 21:16:53.979640    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0718 21:16:54.027338    6638 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0718 21:16:54.027447    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:54.044360    6638 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0718 21:16:54.044374    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0718 21:16:54.059296    6638 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0718 21:16:54.059321    6638 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:54.059384    6638 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:54.131090    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0718 21:16:54.131110    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0718 21:16:54.131221    6638 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0718 21:16:54.143564    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0718 21:16:54.143597    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0718 21:16:54.209879    6638 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0718 21:16:54.209894    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0718 21:16:54.549218    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0718 21:16:54.549240    6638 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0718 21:16:54.549246    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0718 21:16:54.694873    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0718 21:16:54.694910    6638 cache_images.go:92] duration metric: took 1.296250833s to LoadCachedImages
	W0718 21:16:54.694953    6638 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
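	The cache-load sequence above reduces to a handful of shell steps. The sketch below restates them with comments; the pause_3.7 paths and image tag are taken from the log, and the plain scp invocation stands in for minikube's internal SSH file copy, so treat it as illustrative rather than as the exact implementation:
	
	# 1. Ask the guest's Docker daemon whether the image is already present under the expected ID
	docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
	# 2. If the tag exists but under the wrong digest, remove it so the cached copy can replace it
	docker rmi registry.k8s.io/pause:3.7
	# 3. Check for a previously transferred tarball on the guest ...
	stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	# 4. ... copy it over if missing (illustrative target; minikube streams the file over its own SSH runner) ...
	scp ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 guest:/var/lib/minikube/images/pause_3.7
	# 5. ... and load it into the container runtime
	sudo cat /var/lib/minikube/images/pause_3.7 | docker load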
	I0718 21:16:54.694959    6638 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0718 21:16:54.695015    6638 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-465000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 21:16:54.695086    6638 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 21:16:54.710465    6638 cni.go:84] Creating CNI manager for ""
	I0718 21:16:54.710481    6638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:16:54.710487    6638 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 21:16:54.710496    6638 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-465000 NodeName:stopped-upgrade-465000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 21:16:54.710571    6638 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-465000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 21:16:54.710641    6638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0718 21:16:54.713688    6638 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 21:16:54.713741    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 21:16:54.716696    6638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0718 21:16:54.723065    6638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 21:16:54.729113    6638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0718 21:16:54.735407    6638 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0718 21:16:54.736829    6638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
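	The /etc/hosts rewrite above is a single compound command; expanded with comments it reads as follows (same commands as in the log line, only broken out for readability):
	
	{
	  # keep every line except an existing control-plane.minikube.internal entry
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  # append the fresh mapping for this cluster's API endpoint
	  echo "10.0.2.15	control-plane.minikube.internal"
	} > /tmp/h.$$        # $$ expands to the shell's PID, giving a unique temp file
	sudo cp /tmp/h.$$ /etc/hosts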
	I0718 21:16:54.740864    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:54.796831    6638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:16:54.806186    6638 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000 for IP: 10.0.2.15
	I0718 21:16:54.806197    6638 certs.go:194] generating shared ca certs ...
	I0718 21:16:54.806207    6638 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:54.806384    6638 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 21:16:54.806424    6638 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 21:16:54.806429    6638 certs.go:256] generating profile certs ...
	I0718 21:16:54.806496    6638 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.key
	I0718 21:16:54.806521    6638 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56
	I0718 21:16:54.806542    6638 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0718 21:16:55.173763    6638 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56 ...
	I0718 21:16:55.173780    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56: {Name:mka167769f81b4d9e2e558c8fdd5ced3a7d6c8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:55.174066    6638 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56 ...
	I0718 21:16:55.174071    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56: {Name:mkcf6bc32bd8f1298ab3848ad38b38515e044eff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:55.174223    6638 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt
	I0718 21:16:55.174367    6638 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key
	I0718 21:16:55.174516    6638 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/proxy-client.key
	I0718 21:16:55.174703    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 21:16:55.174733    6638 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 21:16:55.174742    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 21:16:55.174769    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 21:16:55.174796    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 21:16:55.174822    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 21:16:55.174879    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 21:16:55.175258    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 21:16:55.182681    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 21:16:55.190227    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 21:16:55.197106    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 21:16:55.204521    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0718 21:16:55.211634    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 21:16:55.219111    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 21:16:55.226230    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0718 21:16:55.232860    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 21:16:55.239962    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 21:16:55.247070    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 21:16:55.253663    6638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 21:16:55.258592    6638 ssh_runner.go:195] Run: openssl version
	I0718 21:16:55.260283    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 21:16:55.263498    6638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:16:55.265028    6638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:16:55.265048    6638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:16:55.266748    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 21:16:55.269494    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 21:16:55.272765    6638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 21:16:55.274050    6638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 21:16:55.274079    6638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 21:16:55.275698    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 21:16:55.278581    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 21:16:55.281397    6638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 21:16:55.282800    6638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 21:16:55.282820    6638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 21:16:55.284489    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 21:16:55.287996    6638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:16:55.289446    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0718 21:16:55.291920    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0718 21:16:55.293929    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0718 21:16:55.295813    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0718 21:16:55.297620    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0718 21:16:55.299318    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
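	Each openssl run above asks whether the named certificate stays valid for at least another 86400 seconds (24 hours): -checkend exits 0 when the certificate will not expire inside that window and non-zero otherwise. A standalone equivalent, using one of the certificate paths from the log, would look roughly like this:
	
	# exit status 0 = certificate does NOT expire within the next 24 hours
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate still valid for at least 24h"
	else
	  echo "certificate expires within 24h (or is already invalid); regenerate it"
	fi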
	I0718 21:16:55.301204    6638 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:16:55.301275    6638 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:16:55.314469    6638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 21:16:55.317597    6638 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0718 21:16:55.317604    6638 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0718 21:16:55.317627    6638 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0718 21:16:55.320674    6638 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:16:55.320963    6638 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-465000" does not appear in /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:16:55.321062    6638 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-1213/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-465000" cluster setting kubeconfig missing "stopped-upgrade-465000" context setting]
	I0718 21:16:55.321262    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:55.321692    6638 kapi.go:59] client config for stopped-upgrade-465000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101c0f790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:16:55.322001    6638 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0718 21:16:55.324627    6638 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-465000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0718 21:16:55.324637    6638 kubeadm.go:1160] stopping kube-system containers ...
	I0718 21:16:55.324679    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:16:55.335291    6638 docker.go:483] Stopping containers: [8dfb9b191dbc af09e6d0a161 727d33ccdf8e 356bfe220705 874999ffa41b cd477da80381 97155289b259 ccbbd707a9a3]
	I0718 21:16:55.335374    6638 ssh_runner.go:195] Run: docker stop 8dfb9b191dbc af09e6d0a161 727d33ccdf8e 356bfe220705 874999ffa41b cd477da80381 97155289b259 ccbbd707a9a3
	I0718 21:16:55.345831    6638 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0718 21:16:55.351560    6638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:16:55.354371    6638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:16:55.354378    6638 kubeadm.go:157] found existing configuration files:
	
	I0718 21:16:55.354408    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf
	I0718 21:16:55.356875    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:16:55.356907    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:16:55.359936    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf
	I0718 21:16:55.362843    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:16:55.362865    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:16:55.365423    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf
	I0718 21:16:55.368286    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:16:55.368311    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:16:55.371541    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf
	I0718 21:16:55.374282    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:16:55.374313    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:16:55.376723    6638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:16:55.380083    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:55.403544    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:55.910111    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:56.025319    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:56.045963    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:56.066690    6638 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:16:56.066764    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:16:58.412850    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:16:58.413098    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:16:58.435373    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:16:58.435492    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:16:58.450234    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:16:58.450312    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:16:58.462265    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:16:58.462340    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:16:58.473385    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:16:58.473456    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:16:58.483721    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:16:58.483795    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:16:58.494111    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:16:58.494175    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:16:58.504123    6499 logs.go:276] 0 containers: []
	W0718 21:16:58.504132    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:16:58.504183    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:16:58.514523    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:16:58.514543    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:16:58.514548    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:16:58.531282    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:16:58.531292    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:16:58.546393    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:16:58.546404    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:16:58.557738    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:16:58.557749    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:16:58.569083    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:16:58.569093    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:16:58.579910    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:16:58.579924    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:16:58.602272    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:16:58.602279    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:16:58.637091    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:16:58.637098    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:16:58.650408    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:16:58.650418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:16:58.662277    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:16:58.662288    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:16:58.696646    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:16:58.696660    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:16:58.713793    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:16:58.713804    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:16:58.725568    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:16:58.725578    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:16:58.730268    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:16:58.730276    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:16:58.748514    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:16:58.748528    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:16:58.762066    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:16:58.762075    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:16:58.776160    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:16:58.776170    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:16:56.568888    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:16:57.068814    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:16:57.073911    6638 api_server.go:72] duration metric: took 1.007248625s to wait for apiserver process to appear ...
	I0718 21:16:57.073922    6638 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:16:57.073931    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:01.292819    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:02.075936    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:02.075978    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:06.294939    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:06.295087    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:06.307302    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:06.307372    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:06.317598    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:06.317663    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:06.328444    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:06.328515    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:06.338916    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:06.338984    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:06.349414    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:06.349475    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:06.360099    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:06.360164    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:06.370328    6499 logs.go:276] 0 containers: []
	W0718 21:17:06.370343    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:06.370405    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:06.382171    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:06.382188    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:06.382193    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:06.395176    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:06.395188    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:06.434432    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:06.434440    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:06.439374    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:06.439384    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:06.474748    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:06.474760    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:06.488915    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:06.488924    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:06.500164    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:06.500176    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:06.511846    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:06.511857    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:06.523792    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:06.523802    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:06.548229    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:06.548236    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:06.565367    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:06.565377    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:06.578611    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:06.578624    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:06.594153    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:06.594163    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:06.608803    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:06.608812    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:06.623280    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:06.623299    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:06.641321    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:06.641332    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:06.654659    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:06.654668    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:09.168067    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:07.076073    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:07.076121    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:14.170311    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:14.170485    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:14.182504    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:14.182580    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:14.193529    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:14.193609    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:14.204133    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:14.204204    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:14.215215    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:14.215291    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:14.230233    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:14.230302    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:14.240947    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:14.241018    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:14.251654    6499 logs.go:276] 0 containers: []
	W0718 21:17:14.251668    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:14.251725    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:14.262420    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:14.262438    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:14.262444    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:14.298017    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:14.298026    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:14.337500    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:14.337512    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:14.350111    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:14.350124    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:14.362731    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:14.362744    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:14.387838    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:14.387865    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:14.392901    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:14.392913    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:14.408529    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:14.408541    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:14.420611    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:14.420624    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:14.439298    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:14.439318    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:14.455516    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:14.455528    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:14.472126    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:14.472141    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:14.486021    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:14.486032    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:14.499975    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:14.499988    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:14.513366    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:14.513379    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:14.525812    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:14.525825    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:14.541348    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:14.541365    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:12.076649    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:12.076698    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:17.059707    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:17.077167    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:17.077186    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:22.061872    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:22.062055    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:22.081915    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:22.082002    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:22.097120    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:22.097196    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:22.109754    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:22.109821    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:22.121045    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:22.121126    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:22.131206    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:22.131272    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:22.145715    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:22.145786    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:22.155668    6499 logs.go:276] 0 containers: []
	W0718 21:17:22.155679    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:22.155730    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:22.166627    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:22.166642    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:22.166648    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:22.181340    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:22.181350    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:22.192678    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:22.192689    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:22.216090    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:22.216097    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:22.252895    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:22.252903    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:22.264404    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:22.264418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:22.278263    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:22.278273    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:22.290040    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:22.290050    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:22.306690    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:22.306701    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:22.318177    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:22.318186    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:22.329709    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:22.329718    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:22.349181    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:22.349192    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:22.353785    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:22.353792    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:22.392988    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:22.393001    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:22.407029    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:22.407040    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:22.418114    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:22.418126    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:22.432153    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:22.432167    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:24.951057    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:22.077676    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:22.077707    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:29.953286    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:29.953455    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:29.964727    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:29.964809    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:29.975922    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:29.975996    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:29.986472    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:29.986543    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:29.997053    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:29.997124    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:30.008788    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:30.008860    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:30.021003    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:30.021079    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:30.033441    6499 logs.go:276] 0 containers: []
	W0718 21:17:30.033453    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:30.033518    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:30.045832    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:30.045852    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:30.045857    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:30.060371    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:30.060384    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:30.079847    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:30.079861    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:30.091453    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:30.091469    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:30.103058    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:30.103068    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:30.137496    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:30.137508    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:30.149392    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:30.149405    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:30.164407    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:30.164418    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:30.177046    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:30.177059    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:30.188100    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:30.188109    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:30.211551    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:30.211559    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:30.215933    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:30.215940    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:30.229249    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:30.229260    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:30.240653    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:30.240665    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:30.257609    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:30.257620    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:30.292942    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:30.292948    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:30.306902    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:30.306915    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:27.078443    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:27.078491    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:32.822549    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:32.079486    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:32.079535    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:37.824839    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:37.825189    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:37.856104    6499 logs.go:276] 2 containers: [5ec273d0260a 90d9e9c55b43]
	I0718 21:17:37.856208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:37.873077    6499 logs.go:276] 2 containers: [acdbe81f8dae 74d5e32e27cc]
	I0718 21:17:37.873162    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:37.886797    6499 logs.go:276] 1 containers: [137346ad3310]
	I0718 21:17:37.886862    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:37.898235    6499 logs.go:276] 2 containers: [1b2caa2b5191 bc0934f9b595]
	I0718 21:17:37.898297    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:37.908192    6499 logs.go:276] 1 containers: [4ccc709879ef]
	I0718 21:17:37.908276    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:37.918784    6499 logs.go:276] 2 containers: [7f7cbd7dbf6f 4d09a98168ad]
	I0718 21:17:37.918846    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:37.929726    6499 logs.go:276] 0 containers: []
	W0718 21:17:37.929740    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:37.929801    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:37.940620    6499 logs.go:276] 2 containers: [1006d4acf585 c9405bf4762f]
	I0718 21:17:37.940638    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:37.940642    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:37.976507    6499 logs.go:123] Gathering logs for kube-scheduler [1b2caa2b5191] ...
	I0718 21:17:37.976518    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b2caa2b5191"
	I0718 21:17:37.988154    6499 logs.go:123] Gathering logs for kube-controller-manager [4d09a98168ad] ...
	I0718 21:17:37.988166    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d09a98168ad"
	I0718 21:17:37.999775    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:37.999802    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:38.010449    6499 logs.go:123] Gathering logs for etcd [acdbe81f8dae] ...
	I0718 21:17:38.010460    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acdbe81f8dae"
	I0718 21:17:38.024940    6499 logs.go:123] Gathering logs for storage-provisioner [1006d4acf585] ...
	I0718 21:17:38.024950    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1006d4acf585"
	I0718 21:17:38.036416    6499 logs.go:123] Gathering logs for storage-provisioner [c9405bf4762f] ...
	I0718 21:17:38.036428    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9405bf4762f"
	I0718 21:17:38.047757    6499 logs.go:123] Gathering logs for kube-proxy [4ccc709879ef] ...
	I0718 21:17:38.047769    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ccc709879ef"
	I0718 21:17:38.059374    6499 logs.go:123] Gathering logs for kube-controller-manager [7f7cbd7dbf6f] ...
	I0718 21:17:38.059384    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f7cbd7dbf6f"
	I0718 21:17:38.077361    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:38.077372    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:38.101081    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:38.101091    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:38.138020    6499 logs.go:123] Gathering logs for kube-apiserver [5ec273d0260a] ...
	I0718 21:17:38.138030    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec273d0260a"
	I0718 21:17:38.152226    6499 logs.go:123] Gathering logs for kube-apiserver [90d9e9c55b43] ...
	I0718 21:17:38.152239    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90d9e9c55b43"
	I0718 21:17:38.165701    6499 logs.go:123] Gathering logs for coredns [137346ad3310] ...
	I0718 21:17:38.165711    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 137346ad3310"
	I0718 21:17:38.176681    6499 logs.go:123] Gathering logs for kube-scheduler [bc0934f9b595] ...
	I0718 21:17:38.176696    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc0934f9b595"
	I0718 21:17:38.191216    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:17:38.191225    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:38.203095    6499 logs.go:123] Gathering logs for etcd [74d5e32e27cc] ...
	I0718 21:17:38.203106    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d5e32e27cc"
	I0718 21:17:40.719445    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:37.080772    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:37.080809    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:45.721750    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:45.721881    6499 kubeadm.go:597] duration metric: took 4m3.895455209s to restartPrimaryControlPlane
	W0718 21:17:45.721984    6499 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0718 21:17:45.722027    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0718 21:17:42.081016    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:42.081039    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:46.709525    6499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:17:46.714449    6499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:17:46.717490    6499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:17:46.720076    6499 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:17:46.720082    6499 kubeadm.go:157] found existing configuration files:
	
	I0718 21:17:46.720103    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/admin.conf
	I0718 21:17:46.722707    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:17:46.722728    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:17:46.725726    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/kubelet.conf
	I0718 21:17:46.728250    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:17:46.728270    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:17:46.731033    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/controller-manager.conf
	I0718 21:17:46.734142    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:17:46.734164    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:17:46.736751    6499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/scheduler.conf
	I0718 21:17:46.739192    6499 kubeadm.go:163] "https://control-plane.minikube.internal:50316" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50316 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:17:46.739211    6499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:17:46.742165    6499 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 21:17:46.758719    6499 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0718 21:17:46.758755    6499 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 21:17:46.805704    6499 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 21:17:46.805806    6499 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 21:17:46.805957    6499 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 21:17:46.856216    6499 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 21:17:46.860395    6499 out.go:204]   - Generating certificates and keys ...
	I0718 21:17:46.860512    6499 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 21:17:46.860604    6499 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 21:17:46.860646    6499 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0718 21:17:46.860707    6499 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0718 21:17:46.860775    6499 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0718 21:17:46.860800    6499 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0718 21:17:46.860831    6499 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0718 21:17:46.860924    6499 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0718 21:17:46.860970    6499 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0718 21:17:46.861008    6499 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0718 21:17:46.861030    6499 kubeadm.go:310] [certs] Using the existing "sa" key
	I0718 21:17:46.861146    6499 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 21:17:47.010302    6499 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 21:17:47.083311    6499 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 21:17:47.249513    6499 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 21:17:47.404311    6499 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 21:17:47.437240    6499 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 21:17:47.437623    6499 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 21:17:47.437677    6499 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 21:17:47.524123    6499 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 21:17:47.526982    6499 out.go:204]   - Booting up control plane ...
	I0718 21:17:47.527027    6499 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 21:17:47.529015    6499 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 21:17:47.529375    6499 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 21:17:47.529632    6499 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 21:17:47.530447    6499 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0718 21:17:47.082668    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:47.082692    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:52.032381    6499 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501895 seconds
	I0718 21:17:52.032470    6499 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 21:17:52.042268    6499 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 21:17:52.563288    6499 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 21:17:52.563613    6499 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-511000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 21:17:53.069869    6499 kubeadm.go:310] [bootstrap-token] Using token: jyjzy8.eevyqmaux8ek27ts
	I0718 21:17:53.075688    6499 out.go:204]   - Configuring RBAC rules ...
	I0718 21:17:53.075785    6499 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 21:17:53.075856    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 21:17:53.082746    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 21:17:53.083986    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 21:17:53.085372    6499 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 21:17:53.086679    6499 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 21:17:53.091172    6499 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 21:17:53.262083    6499 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 21:17:53.475520    6499 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 21:17:53.475972    6499 kubeadm.go:310] 
	I0718 21:17:53.476009    6499 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 21:17:53.476017    6499 kubeadm.go:310] 
	I0718 21:17:53.476079    6499 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 21:17:53.476083    6499 kubeadm.go:310] 
	I0718 21:17:53.476098    6499 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 21:17:53.476132    6499 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 21:17:53.476160    6499 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 21:17:53.476164    6499 kubeadm.go:310] 
	I0718 21:17:53.476201    6499 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 21:17:53.476205    6499 kubeadm.go:310] 
	I0718 21:17:53.476228    6499 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 21:17:53.476231    6499 kubeadm.go:310] 
	I0718 21:17:53.476262    6499 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 21:17:53.476303    6499 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 21:17:53.476339    6499 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 21:17:53.476344    6499 kubeadm.go:310] 
	I0718 21:17:53.476397    6499 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 21:17:53.476448    6499 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 21:17:53.476451    6499 kubeadm.go:310] 
	I0718 21:17:53.476519    6499 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jyjzy8.eevyqmaux8ek27ts \
	I0718 21:17:53.476577    6499 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 21:17:53.476588    6499 kubeadm.go:310] 	--control-plane 
	I0718 21:17:53.476591    6499 kubeadm.go:310] 
	I0718 21:17:53.476630    6499 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 21:17:53.476633    6499 kubeadm.go:310] 
	I0718 21:17:53.476679    6499 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jyjzy8.eevyqmaux8ek27ts \
	I0718 21:17:53.476745    6499 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 21:17:53.476800    6499 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 21:17:53.476808    6499 cni.go:84] Creating CNI manager for ""
	I0718 21:17:53.476815    6499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:17:53.479862    6499 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0718 21:17:53.486805    6499 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0718 21:17:53.490268    6499 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0718 21:17:53.495108    6499 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 21:17:53.495155    6499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 21:17:53.495186    6499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-511000 minikube.k8s.io/updated_at=2024_07_18T21_17_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=running-upgrade-511000 minikube.k8s.io/primary=true
	I0718 21:17:53.536798    6499 kubeadm.go:1113] duration metric: took 41.684625ms to wait for elevateKubeSystemPrivileges
	I0718 21:17:53.536807    6499 ops.go:34] apiserver oom_adj: -16
	I0718 21:17:53.536815    6499 kubeadm.go:394] duration metric: took 4m11.723757042s to StartCluster
	I0718 21:17:53.536825    6499 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:17:53.536907    6499 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:17:53.537288    6499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:17:53.537493    6499 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:17:53.537498    6499 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 21:17:53.537537    6499 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-511000"
	I0718 21:17:53.537553    6499 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-511000"
	W0718 21:17:53.537560    6499 addons.go:243] addon storage-provisioner should already be in state true
	I0718 21:17:53.537586    6499 host.go:66] Checking if "running-upgrade-511000" exists ...
	I0718 21:17:53.537604    6499 config.go:182] Loaded profile config "running-upgrade-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:17:53.537592    6499 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-511000"
	I0718 21:17:53.537639    6499 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-511000"
	I0718 21:17:53.537864    6499 retry.go:31] will retry after 1.070439171s: connect: dial unix /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/monitor: connect: connection refused
	I0718 21:17:53.538526    6499 kapi.go:59] client config for running-upgrade-511000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/running-upgrade-511000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040ff790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:17:53.538659    6499 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-511000"
	W0718 21:17:53.538663    6499 addons.go:243] addon default-storageclass should already be in state true
	I0718 21:17:53.538671    6499 host.go:66] Checking if "running-upgrade-511000" exists ...
	I0718 21:17:53.539186    6499 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 21:17:53.539191    6499 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 21:17:53.539196    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:17:53.541617    6499 out.go:177] * Verifying Kubernetes components...
	I0718 21:17:53.549802    6499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:17:53.642546    6499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:17:53.647647    6499 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:17:53.647703    6499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:17:53.650805    6499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 21:17:53.655514    6499 api_server.go:72] duration metric: took 118.013084ms to wait for apiserver process to appear ...
	I0718 21:17:53.655523    6499 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:17:53.655531    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:54.614494    6499 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:17:54.618473    6499 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:17:54.618483    6499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 21:17:54.618497    6499 sshutil.go:53] new ssh client: &{IP:localhost Port:50284 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/running-upgrade-511000/id_rsa Username:docker}
	I0718 21:17:54.650739    6499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:17:52.084755    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:52.084797    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:58.657545    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:58.657583    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:57.086998    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:57.087445    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:57.127732    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:17:57.127881    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:57.149570    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:17:57.149664    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:57.166798    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:17:57.166880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:57.179365    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:17:57.179443    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:57.190552    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:17:57.190618    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:57.201023    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:17:57.201097    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:57.211798    6638 logs.go:276] 0 containers: []
	W0718 21:17:57.211808    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:57.211867    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:57.222489    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:17:57.222507    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:17:57.222512    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:17:57.237134    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:17:57.237146    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:17:57.252388    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:17:57.252399    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:17:57.270235    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:17:57.270246    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:17:57.282071    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:57.282082    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:57.321225    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:57.321232    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:57.432239    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:17:57.432254    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:17:57.474218    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:17:57.474236    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:17:57.494865    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:17:57.494875    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:17:57.516712    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:17:57.516722    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:17:57.528013    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:17:57.528026    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:17:57.539797    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:57.539810    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:57.544228    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:17:57.544235    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:17:57.561249    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:17:57.561260    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:17:57.572601    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:57.572611    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:57.596668    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:17:57.596676    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:57.608728    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:17:57.608738    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:00.126285    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:03.657796    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:03.657838    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:05.128474    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:05.128591    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:05.139521    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:05.139605    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:05.150923    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:05.151000    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:05.161891    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:05.161970    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:05.172291    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:05.172366    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:05.182775    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:05.182843    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:05.193782    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:05.193853    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:05.204205    6638 logs.go:276] 0 containers: []
	W0718 21:18:05.204219    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:05.204286    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:05.215253    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:05.215272    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:05.215277    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:05.232016    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:05.232027    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:05.243471    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:05.243483    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:05.254988    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:05.255001    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:05.295283    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:05.295294    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:05.335345    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:05.335355    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:05.347369    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:05.347380    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:05.365427    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:05.365443    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:05.405188    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:05.405200    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:05.420254    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:05.420269    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:05.434036    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:05.434046    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:05.448750    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:05.448762    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:05.460564    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:05.460574    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:05.477680    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:05.477689    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:05.492347    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:05.492358    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:05.516728    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:05.516738    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:05.520804    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:05.520814    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:08.658098    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:08.658119    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:08.035140    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:13.658394    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:13.658420    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:13.037223    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:13.037483    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:13.055692    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:13.055775    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:13.069284    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:13.069361    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:13.080558    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:13.080627    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:13.091022    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:13.091087    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:13.101781    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:13.101853    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:13.111985    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:13.112050    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:13.122543    6638 logs.go:276] 0 containers: []
	W0718 21:18:13.122554    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:13.122614    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:13.132584    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:13.132603    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:13.132608    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:13.147004    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:13.147017    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:13.158373    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:13.158382    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:13.182924    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:13.182930    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:13.198112    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:13.198122    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:13.213080    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:13.213089    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:13.225070    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:13.225084    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:13.263602    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:13.263617    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:13.281543    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:13.281553    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:13.321536    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:13.321547    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:13.335503    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:13.335513    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:13.347989    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:13.348002    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:13.367248    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:13.367264    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:13.385674    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:13.385689    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:13.402760    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:13.402773    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:13.407483    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:13.407493    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:13.442645    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:13.442662    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:15.956929    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:18.659252    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:18.659312    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:20.959180    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:20.959315    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:20.970800    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:20.970878    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:20.981514    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:20.981580    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:20.992052    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:20.992116    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:21.002468    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:21.002534    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:21.012767    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:21.012832    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:21.023520    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:21.023595    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:21.033861    6638 logs.go:276] 0 containers: []
	W0718 21:18:21.033875    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:21.033933    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:21.044390    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:21.044405    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:21.044410    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:21.059367    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:21.059383    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:21.099025    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:21.099036    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:21.112649    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:21.112659    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:21.124542    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:21.124556    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:21.138528    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:21.138539    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:21.149618    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:21.149630    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:21.188645    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:21.188654    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:21.192577    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:21.192585    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:21.203443    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:21.203454    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:21.217815    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:21.217827    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:21.232998    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:21.233010    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:21.258018    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:21.258026    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:21.272318    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:21.272328    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:21.289437    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:21.289452    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:21.300685    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:21.300699    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:21.314427    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:21.314438    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:23.660429    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:23.660474    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0718 21:18:23.964223    6499 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0718 21:18:23.969606    6499 out.go:177] * Enabled addons: storage-provisioner
	I0718 21:18:23.977407    6499 addons.go:510] duration metric: took 30.440787s for enable addons: enabled=[storage-provisioner]
	I0718 21:18:23.855040    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:28.661674    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:28.661713    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:28.857228    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:28.857422    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:28.878622    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:28.878727    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:28.892847    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:28.892925    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:28.905182    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:28.905250    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:28.919238    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:28.919308    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:28.929570    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:28.929645    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:28.946697    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:28.946773    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:28.956484    6638 logs.go:276] 0 containers: []
	W0718 21:18:28.956497    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:28.956548    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:28.966963    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:28.966980    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:28.966986    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:29.001995    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:29.002005    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:29.026132    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:29.026142    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:29.064651    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:29.064662    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:29.078736    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:29.078745    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:29.092972    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:29.092982    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:29.104935    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:29.104945    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:29.117615    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:29.117628    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:29.135530    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:29.135542    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:29.146802    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:29.146813    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:29.163199    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:29.163209    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:29.178081    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:29.178092    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:29.216930    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:29.216938    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:29.221457    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:29.221467    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:29.233845    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:29.233856    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:29.248583    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:29.248593    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:29.271633    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:29.271640    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:33.663193    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:33.663233    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:31.785136    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:38.665427    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:38.665461    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:36.786435    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:36.786651    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:36.811186    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:36.811299    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:36.827281    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:36.827359    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:36.840698    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:36.840767    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:36.852117    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:36.852190    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:36.863528    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:36.863588    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:36.874172    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:36.874237    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:36.889705    6638 logs.go:276] 0 containers: []
	W0718 21:18:36.889717    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:36.889772    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:36.900639    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:36.900658    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:36.900663    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:36.915397    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:36.915406    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:36.926626    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:36.926637    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:36.949824    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:36.949832    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:36.986068    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:36.986075    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:36.999934    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:36.999944    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:37.011502    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:37.011512    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:37.022703    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:37.022713    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:37.040360    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:37.040370    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:37.075274    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:37.075287    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:37.089931    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:37.089942    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:37.103553    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:37.103561    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:37.118133    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:37.118142    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:37.129894    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:37.129908    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:37.133903    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:37.133910    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:37.172583    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:37.172597    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:37.187588    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:37.187603    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:39.700643    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:43.667588    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:43.667612    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:44.702850    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:44.703003    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:44.720774    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:44.720862    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:44.734293    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:44.734370    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:44.745337    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:44.745397    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:44.755822    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:44.755886    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:44.766272    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:44.766329    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:44.776775    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:44.776860    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:44.790095    6638 logs.go:276] 0 containers: []
	W0718 21:18:44.790104    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:44.790156    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:44.802424    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:44.802440    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:44.802445    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:44.818325    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:44.818362    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:44.829547    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:44.829558    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:44.840999    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:44.841009    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:44.855569    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:44.855578    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:44.869508    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:44.869518    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:44.883931    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:44.883945    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:44.895963    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:44.895979    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:44.908127    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:44.908139    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:44.912584    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:44.912591    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:44.946075    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:44.946090    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:44.963467    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:44.963478    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:44.986612    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:44.986619    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:44.998299    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:44.998309    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:45.037009    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:45.037017    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:45.075386    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:45.075398    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:45.091999    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:45.092011    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:48.669696    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:48.669751    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:47.615406    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:53.671828    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:53.671916    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:53.682300    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:18:53.682372    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:53.693485    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:18:53.693556    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:53.704118    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:18:53.704188    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:53.715355    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:18:53.715427    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:53.731475    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:18:53.731562    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:53.742147    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:18:53.742214    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:53.752462    6499 logs.go:276] 0 containers: []
	W0718 21:18:53.752475    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:53.752539    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:53.762604    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:18:53.762616    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:53.762622    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:53.797786    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:53.797798    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:53.802755    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:18:53.802762    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:18:53.816919    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:18:53.816929    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:18:53.829068    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:18:53.829087    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:18:53.840666    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:18:53.840677    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:18:53.852392    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:18:53.852402    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:18:53.869519    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:53.869530    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:53.908030    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:18:53.908044    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:18:53.922575    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:18:53.922585    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:18:53.936908    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:18:53.936918    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:18:53.948259    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:53.948272    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:53.971357    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:18:53.971364    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:52.617557    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:52.617771    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:52.644399    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:52.644493    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:52.658941    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:52.659024    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:52.671515    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:52.671579    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:52.682163    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:52.682231    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:52.693286    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:52.693349    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:52.704552    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:52.704615    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:52.716189    6638 logs.go:276] 0 containers: []
	W0718 21:18:52.716200    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:52.716250    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:52.726686    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:52.726703    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:52.726708    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:52.741469    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:52.741481    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:52.753132    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:52.753145    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:52.791935    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:52.791942    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:52.831446    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:52.831457    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:52.845325    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:52.845335    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:52.857553    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:52.857564    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:52.872003    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:52.872012    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:52.888758    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:52.888768    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:52.903339    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:52.903350    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:52.927845    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:52.927853    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:52.941513    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:52.941524    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:52.953944    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:52.953954    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:52.988055    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:52.988066    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:52.999735    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:52.999746    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:53.004350    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:53.004356    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:53.016013    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:53.016021    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:55.529421    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:56.484261    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:00.531614    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:00.531779    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:00.545805    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:00.545886    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:00.557224    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:00.557297    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:00.567619    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:00.567685    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:00.580060    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:00.580139    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:00.591200    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:00.591267    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:00.602034    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:00.602105    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:00.612056    6638 logs.go:276] 0 containers: []
	W0718 21:19:00.612067    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:00.612126    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:00.623600    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:00.623617    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:00.623623    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:00.637629    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:00.637639    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:00.652190    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:00.652200    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:00.667487    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:00.667497    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:00.692631    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:00.692648    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:00.733092    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:00.733105    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:00.737292    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:00.737298    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:00.748698    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:00.748709    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:00.760494    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:00.760506    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:00.774342    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:00.774352    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:00.817707    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:00.817718    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:00.830815    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:00.830826    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:00.842442    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:00.842453    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:00.854760    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:00.854770    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:00.890237    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:00.890247    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:00.930396    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:00.930407    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:00.944580    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:00.944591    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:01.486474    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:01.486703    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:01.506839    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:01.506924    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:01.521821    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:01.521898    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:01.533841    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:01.533909    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:01.545304    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:01.545385    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:01.556195    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:01.556263    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:01.567321    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:01.567384    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:01.583295    6499 logs.go:276] 0 containers: []
	W0718 21:19:01.583311    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:01.583372    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:01.594636    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:01.594653    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:01.594659    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:01.629617    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:01.629626    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:01.645494    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:01.645505    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:01.657468    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:01.657479    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:01.672739    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:01.672750    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:01.687865    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:01.687875    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:01.706660    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:01.706676    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:01.724432    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:01.724442    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:01.729145    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:01.729151    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:01.763808    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:01.763818    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:01.782489    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:01.782498    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:01.796439    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:01.796451    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:01.820984    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:01.820994    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:04.334670    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:03.465386    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:09.337002    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:09.337323    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:09.372047    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:09.372176    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:09.391269    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:09.391378    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:09.406051    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:09.406125    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:09.418353    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:09.418428    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:09.429269    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:09.429344    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:09.441206    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:09.441276    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:09.453948    6499 logs.go:276] 0 containers: []
	W0718 21:19:09.453960    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:09.454021    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:09.464469    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:09.464484    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:09.464489    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:09.476388    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:09.476399    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:09.481030    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:09.481037    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:09.524847    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:09.524857    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:09.539367    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:09.539378    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:09.551161    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:09.551174    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:09.565993    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:09.566003    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:09.578252    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:09.578264    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:09.613544    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:09.613551    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:09.627312    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:09.627324    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:09.641196    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:09.641214    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:09.653636    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:09.653647    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:09.671943    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:09.671953    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:08.467587    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:08.467809    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:08.484504    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:08.484590    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:08.497395    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:08.497462    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:08.508798    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:08.508867    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:08.519361    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:08.519428    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:08.529955    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:08.530018    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:08.541054    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:08.541114    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:08.551474    6638 logs.go:276] 0 containers: []
	W0718 21:19:08.551486    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:08.551538    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:08.562597    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:08.562618    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:08.562623    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:08.582969    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:08.582980    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:08.597504    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:08.597516    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:08.609125    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:08.609135    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:08.620168    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:08.620179    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:08.634934    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:08.634946    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:08.659865    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:08.659875    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:08.674514    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:08.674527    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:08.693053    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:08.693062    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:08.705525    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:08.705540    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:08.716772    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:08.716782    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:08.740173    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:08.740185    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:08.754268    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:08.754280    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:08.792385    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:08.792398    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:08.807412    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:08.807423    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:08.843412    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:08.843425    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:08.848226    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:08.848236    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:12.196434    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:11.388783    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:17.198699    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:17.198921    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:17.223860    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:17.223980    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:17.240630    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:17.240719    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:17.254385    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:17.254454    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:17.265280    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:17.265344    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:17.276500    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:17.276575    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:17.286941    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:17.287012    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:17.297538    6499 logs.go:276] 0 containers: []
	W0718 21:19:17.297554    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:17.297610    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:17.308057    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:17.308073    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:17.308083    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:17.322424    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:17.322436    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:17.333938    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:17.333950    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:17.352010    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:17.352023    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:17.363085    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:17.363095    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:17.386515    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:17.386522    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:17.399553    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:17.399566    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:17.434414    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:17.434421    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:17.468789    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:17.468803    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:17.487468    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:17.487478    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:17.499203    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:17.499216    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:17.514292    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:17.514301    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:17.525758    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:17.525769    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:20.032156    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:16.390999    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:16.391516    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:16.428975    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:16.429122    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:16.448922    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:16.449029    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:16.468938    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:16.469019    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:16.480782    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:16.480855    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:16.491402    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:16.491467    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:16.503162    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:16.503230    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:16.517866    6638 logs.go:276] 0 containers: []
	W0718 21:19:16.517880    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:16.517936    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:16.528419    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:16.528438    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:16.528444    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:16.565743    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:16.565755    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:16.601041    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:16.601053    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:16.620087    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:16.620101    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:16.631375    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:16.631390    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:16.654928    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:16.654937    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:16.666735    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:16.666748    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:16.684359    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:16.684369    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:16.697639    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:16.697650    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:16.712238    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:16.712250    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:16.751045    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:16.751055    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:16.761952    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:16.761962    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:16.765981    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:16.765990    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:16.780717    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:16.780726    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:16.793284    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:16.793295    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:16.807927    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:16.807940    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:16.822615    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:16.822625    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:19.335889    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
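The recurring pairs of "Checking apiserver healthz" and "stopped: ... context deadline exceeded" lines come from minikube repeatedly probing the apiserver health endpoint and giving up after a client-side timeout. Below is a minimal, hypothetical Go sketch of such a probe loop, not minikube's actual implementation: the endpoint URL and the roughly five-second spacing are taken from the log, while the skip-verify TLS setting and the retry interval are assumptions made so the sketch runs against a self-signed apiserver certificate.

// probe.go - hypothetical sketch of a healthz probe that would produce
// "stopped: ... context deadline exceeded (Client.Timeout exceeded while
// awaiting headers)" on timeout, as seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues a single GET against the healthz endpoint with a
// hard client timeout and reports any failure to the caller.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Assumption: the apiserver certificate is self-signed,
			// so verification is skipped in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded (Client.Timeout exceeded)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	const url = "https://10.0.2.15:8443/healthz" // endpoint from the log
	for {
		if err := probeHealthz(url, 5*time.Second); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			fmt.Println("healthz ok")
			return
		}
		time.Sleep(5 * time.Second) // hypothetical retry interval
	}
}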
	I0718 21:19:25.034411    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:25.034545    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:25.048298    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:25.048377    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:25.060536    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:25.060607    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:25.072220    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:25.072290    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:25.083141    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:25.083208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:25.093685    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:25.093754    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:25.104141    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:25.104213    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:25.113885    6499 logs.go:276] 0 containers: []
	W0718 21:19:25.113895    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:25.113950    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:25.124093    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:25.124107    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:25.124113    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:25.135421    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:25.135431    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:25.150639    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:25.150650    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:25.169134    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:25.169146    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:25.181479    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:25.181490    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:25.186315    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:25.186321    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:25.255378    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:25.255389    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:25.269532    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:25.269542    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:25.283323    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:25.283335    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:25.297899    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:25.297911    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:25.309589    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:25.309600    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:25.334112    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:25.334124    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:25.367948    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:25.367959    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:24.338283    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:24.338684    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:24.372132    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:24.372264    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:24.390515    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:24.390614    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:24.414739    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:24.414812    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:24.430600    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:24.430681    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:24.441370    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:24.441446    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:24.452544    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:24.452622    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:24.463294    6638 logs.go:276] 0 containers: []
	W0718 21:19:24.463306    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:24.463359    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:24.473774    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:24.473793    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:24.473798    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:24.496489    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:24.496497    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:24.508036    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:24.508049    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:24.519466    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:24.519477    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:24.531021    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:24.531031    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:24.542476    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:24.542489    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:24.556186    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:24.556196    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:24.573428    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:24.573437    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:24.584598    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:24.584612    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:24.621828    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:24.621842    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:24.633306    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:24.633319    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:24.648018    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:24.648029    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:24.685822    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:24.685837    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:24.720063    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:24.720075    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:24.735420    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:24.735434    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:24.740294    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:24.740302    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:24.755330    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:24.755343    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:27.882054    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:27.271548    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:32.884186    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:32.884300    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:32.895896    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:32.895971    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:32.906265    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:32.906332    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:32.917047    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:32.917117    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:32.927270    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:32.927352    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:32.937743    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:32.937811    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:32.948284    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:32.948351    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:32.958329    6499 logs.go:276] 0 containers: []
	W0718 21:19:32.958341    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:32.958403    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:32.972104    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:32.972120    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:32.972129    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:33.009972    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:33.009983    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:33.027963    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:33.027973    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:33.039340    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:33.039350    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:33.051223    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:33.051233    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:33.068621    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:33.068631    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:33.079941    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:33.079951    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:33.114656    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:33.114664    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:33.118809    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:33.118817    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:33.137372    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:33.137381    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:33.149202    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:33.149214    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:33.160559    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:33.160572    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:33.174829    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:33.174842    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:35.699870    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:32.274018    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:32.274252    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:32.296560    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:32.296659    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:32.312532    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:32.312604    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:32.325359    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:32.325438    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:32.336903    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:32.336976    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:32.347253    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:32.347325    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:32.358006    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:32.358069    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:32.368424    6638 logs.go:276] 0 containers: []
	W0718 21:19:32.368437    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:32.368487    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:32.378670    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:32.378689    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:32.378694    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:32.382990    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:32.382999    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:32.420953    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:32.420964    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:32.436123    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:32.436134    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:32.453968    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:32.453982    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:32.466195    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:32.466208    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:32.491127    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:32.491143    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:32.506605    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:32.506617    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:32.527848    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:32.527861    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:32.539930    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:32.539942    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:32.579469    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:32.579478    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:32.593923    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:32.593936    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:32.630259    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:32.630271    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:32.646084    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:32.646093    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:32.657940    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:32.657951    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:32.669213    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:32.669223    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:32.682496    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:32.682513    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:35.195859    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:40.702280    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:40.702431    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:40.713929    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:40.713999    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:40.724774    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:40.724850    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:40.739386    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:40.739457    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:40.757320    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:40.757394    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:40.768310    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:40.768384    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:40.778994    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:40.779062    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:40.789159    6499 logs.go:276] 0 containers: []
	W0718 21:19:40.789172    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:40.789231    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:40.800011    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:40.800026    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:40.800031    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:40.814865    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:40.814876    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:40.826828    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:40.826839    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:40.845187    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:40.845199    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:40.870417    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:40.870424    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:40.903468    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:40.903478    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:40.918389    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:40.918400    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:40.932151    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:40.932162    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:40.947820    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:40.947831    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:40.959522    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:40.959532    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:40.198069    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:40.198534    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:40.238506    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:40.238648    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:40.259207    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:40.259302    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:40.274408    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:40.274485    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:40.286991    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:40.287062    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:40.297521    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:40.297585    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:40.308316    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:40.308381    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:40.326001    6638 logs.go:276] 0 containers: []
	W0718 21:19:40.326013    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:40.326068    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:40.336683    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:40.336703    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:40.336708    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:40.348508    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:40.348520    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:40.362842    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:40.362852    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:40.387365    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:40.387372    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:40.403214    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:40.403224    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:40.417817    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:40.417827    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:40.431567    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:40.431578    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:40.443357    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:40.443367    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:40.457008    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:40.457020    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:40.469869    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:40.469879    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:40.507881    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:40.507889    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:40.512296    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:40.512302    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:40.523570    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:40.523580    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:40.541087    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:40.541097    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:40.556535    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:40.556547    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:40.568343    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:40.568355    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:40.603778    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:40.603789    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:40.971369    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:40.971378    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:40.983134    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:40.983143    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:40.987557    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:40.987564    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:43.524315    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:43.143619    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
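Each time the probe times out, the same diagnostic pass follows: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to locate each control-plane container, then docker logs --tail 400 <id> for every ID found, alongside the journalctl, dmesg, and kubectl describe nodes commands shown verbatim above. The following rough Go sketch covers only the per-container portion of that pass; it is a hypothetical helper, not minikube's code, and assumes docker is available on PATH.

// gather.go - hypothetical sketch of the per-component container log
// collection visible in the "Gathering logs for ..." lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches k8s_<component>, mirroring the docker ps filter in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			// Dump the last 400 lines, as the log-gathering commands above do.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}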
	I0718 21:19:48.526551    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:48.526641    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:48.545525    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:48.545601    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:48.563690    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:48.563760    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:48.573969    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:48.574041    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:48.584588    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:48.584659    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:48.595589    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:48.595660    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:48.605879    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:48.605943    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:48.616037    6499 logs.go:276] 0 containers: []
	W0718 21:19:48.616047    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:48.616105    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:48.631060    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:48.631080    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:48.631087    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:48.649808    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:48.649819    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:48.664012    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:48.664021    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:48.675900    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:48.675911    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:48.687140    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:48.687150    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:48.711662    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:48.711669    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:48.723364    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:48.723377    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:48.758291    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:48.758299    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:48.762384    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:48.762390    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:48.798739    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:48.798750    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:48.813801    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:48.813810    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:48.826522    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:48.826533    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:48.844180    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:48.844193    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:48.145876    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:48.146299    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:48.173791    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:48.173904    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:48.197486    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:48.197568    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:48.210173    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:48.210247    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:48.221569    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:48.221639    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:48.232802    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:48.232877    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:48.243716    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:48.243784    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:48.253938    6638 logs.go:276] 0 containers: []
	W0718 21:19:48.253949    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:48.254009    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:48.264273    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:48.264292    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:48.264298    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:48.278184    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:48.278194    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:48.289647    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:48.289658    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:48.301324    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:48.301335    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:48.335546    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:48.335558    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:48.349417    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:48.349429    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:48.361831    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:48.361843    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:48.373408    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:48.373419    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:48.410110    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:48.410125    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:48.421328    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:48.421338    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:48.435103    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:48.435113    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:48.452707    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:48.452718    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:48.475072    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:48.475081    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:48.487123    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:48.487134    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:48.491934    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:48.491947    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:48.530262    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:48.530271    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:48.548457    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:48.548466    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:51.066137    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:51.360076    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:56.068297    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:56.068470    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:56.079639    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:56.079716    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:56.090186    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:56.090255    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:56.100587    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:56.100657    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:56.110670    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:56.110752    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:56.121073    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:56.121141    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:56.131598    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:56.131660    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:56.142467    6638 logs.go:276] 0 containers: []
	W0718 21:19:56.142483    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:56.142543    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:56.153277    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:56.153295    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:56.153300    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:56.167021    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:56.167030    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:56.181316    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:56.181329    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:56.217575    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:56.217581    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:56.229103    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:56.229112    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:56.243257    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:56.243268    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:56.255056    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:56.255068    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:56.280196    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:56.280208    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:56.297729    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:56.297741    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:56.360287    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:56.360383    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:56.371782    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:19:56.371855    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:56.387949    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:19:56.388033    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:56.399753    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:19:56.399828    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:56.411448    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:19:56.411522    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:56.422168    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:19:56.422237    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:56.432389    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:19:56.432462    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:56.443276    6499 logs.go:276] 0 containers: []
	W0718 21:19:56.443288    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:56.443351    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:56.455420    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:19:56.455436    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:56.455441    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:56.460512    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:19:56.460523    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:19:56.476279    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:19:56.476289    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:19:56.501370    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:19:56.501380    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:56.512370    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:19:56.512381    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:19:56.524318    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:19:56.524328    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:19:56.538770    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:19:56.538783    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:19:56.550375    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:19:56.550387    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:19:56.562544    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:56.562554    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:56.595471    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:56.595479    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:56.630812    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:19:56.630827    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:19:56.645657    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:19:56.645667    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:19:56.660473    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:56.660486    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:59.185625    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:56.334367    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:56.334378    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:56.346264    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:56.346275    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:56.360724    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:56.360732    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:56.378404    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:56.378416    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:56.392286    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:56.392297    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:56.396844    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:56.396852    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:56.436744    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:56.436756    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:56.452946    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:56.452957    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:58.967062    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:04.187758    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:04.187847    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:04.199051    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:04.199114    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:04.209838    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:04.209910    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:04.220882    6499 logs.go:276] 2 containers: [8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:04.220963    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:04.232872    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:04.232942    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:04.244292    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:04.244364    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:04.260452    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:04.260527    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:04.272088    6499 logs.go:276] 0 containers: []
	W0718 21:20:04.272101    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:04.272164    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:04.283307    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:04.283322    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:04.283328    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:04.297282    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:04.297296    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:04.310656    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:04.310670    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:04.323063    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:04.323078    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:04.338862    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:04.338876    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:04.351378    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:04.351388    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:04.364162    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:04.364174    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:04.388951    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:04.388964    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:04.424765    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:04.424778    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:04.429624    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:04.429632    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:04.467510    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:04.467523    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:04.481549    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:04.481560    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:04.495580    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:04.495591    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:03.967811    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:03.968010    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:03.989777    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:03.989864    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:04.004348    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:04.004426    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:04.018332    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:04.018396    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:04.029435    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:04.029510    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:04.043944    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:04.044013    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:04.055707    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:04.055782    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:04.065817    6638 logs.go:276] 0 containers: []
	W0718 21:20:04.065829    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:04.065887    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:04.076257    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:04.076277    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:04.076282    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:04.080516    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:04.080525    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:04.094219    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:04.094228    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:04.111187    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:04.111198    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:04.122877    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:04.122887    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:04.138088    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:04.138097    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:04.149835    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:04.149844    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:04.164565    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:04.164575    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:04.203103    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:04.203123    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:04.218463    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:04.218477    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:04.236930    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:04.236939    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:04.262436    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:04.262448    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:04.300972    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:04.300985    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:04.313158    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:04.313169    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:04.329452    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:04.329469    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:04.342407    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:04.342420    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:04.354895    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:04.354907    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:07.014521    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:06.896044    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:12.014682    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:12.014758    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:12.025924    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:12.025999    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:12.038273    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:12.038341    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:12.049836    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:12.049907    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:12.063356    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:12.063433    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:12.075131    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:12.075202    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:12.086489    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:12.086555    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:12.100161    6499 logs.go:276] 0 containers: []
	W0718 21:20:12.100174    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:12.100239    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:12.111672    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:12.111690    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:12.111696    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:12.124088    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:12.124100    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:12.139694    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:12.139708    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:12.167244    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:12.167269    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:12.181293    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:12.181307    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:12.186125    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:12.186137    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:12.226609    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:12.226620    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:12.239047    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:12.239059    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:12.257467    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:12.257479    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:12.270127    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:12.270140    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:12.284504    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:12.284512    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:12.297074    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:12.297082    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:12.309644    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:12.309656    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:12.322156    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:12.322166    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:12.355813    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:12.355822    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:14.874950    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:11.898440    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:11.898622    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:11.910284    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:11.910361    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:11.921224    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:11.921296    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:11.931778    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:11.931843    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:11.942531    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:11.942602    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:11.953540    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:11.953606    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:11.963701    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:11.963773    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:11.973748    6638 logs.go:276] 0 containers: []
	W0718 21:20:11.973759    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:11.973817    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:11.984792    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:11.984810    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:11.984817    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:12.000143    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:12.000157    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:12.014387    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:12.014400    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:12.053816    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:12.053829    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:12.068509    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:12.068520    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:12.108007    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:12.108019    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:12.127175    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:12.127184    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:12.147667    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:12.147680    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:12.162719    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:12.162736    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:12.175724    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:12.175741    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:12.191422    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:12.191436    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:12.203765    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:12.203778    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:12.242672    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:12.242683    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:12.258396    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:12.258404    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:12.282514    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:12.282526    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:12.296134    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:12.296146    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:12.300507    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:12.300520    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:14.815520    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:19.877015    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:19.877089    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:19.888636    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:19.888706    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:19.900901    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:19.900974    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:19.913235    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:19.913310    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:19.924801    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:19.924873    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:19.935948    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:19.936017    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:19.951950    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:19.952023    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:19.963747    6499 logs.go:276] 0 containers: []
	W0718 21:20:19.963758    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:19.963820    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:19.975414    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:19.975432    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:19.975437    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:19.988610    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:19.988619    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:20.004814    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:20.004827    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:20.023810    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:20.023826    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:20.061084    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:20.061098    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:20.104995    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:20.105008    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:20.128497    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:20.128509    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:20.158030    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:20.158041    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:20.172507    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:20.172518    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:20.197890    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:20.197908    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:20.215482    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:20.215493    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:20.231466    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:20.231478    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:20.244789    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:20.244805    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:20.257590    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:20.257603    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:20.262149    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:20.262158    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:19.817763    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:19.817997    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:19.841206    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:19.841338    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:19.857720    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:19.857795    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:19.870478    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:19.870554    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:19.882128    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:19.882203    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:19.893601    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:19.893667    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:19.905680    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:19.905760    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:19.916945    6638 logs.go:276] 0 containers: []
	W0718 21:20:19.916974    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:19.917035    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:19.933170    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:19.933187    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:19.933193    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:19.970840    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:19.970852    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:20.010210    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:20.010228    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:20.025861    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:20.025869    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:20.038234    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:20.038245    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:20.079212    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:20.079229    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:20.095166    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:20.095180    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:20.107943    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:20.107954    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:20.135976    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:20.135988    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:20.148969    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:20.148984    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:20.168703    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:20.168717    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:20.187209    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:20.187220    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:20.191480    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:20.191486    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:20.206433    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:20.206450    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:20.223120    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:20.223134    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:20.235536    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:20.235547    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:20.259100    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:20.259113    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:22.782538    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:22.772776    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:27.784655    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:27.784914    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:27.811401    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:27.811500    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:27.829242    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:27.829319    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:27.843292    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:27.843364    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:27.856070    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:27.856101    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:27.869291    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:27.869339    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:27.880502    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:27.880548    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:27.895482    6499 logs.go:276] 0 containers: []
	W0718 21:20:27.895494    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:27.895552    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:27.907313    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:27.907331    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:27.907337    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:27.943363    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:27.943379    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:27.948705    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:27.948713    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:27.961127    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:27.961137    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:27.987582    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:27.987599    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:28.003221    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:28.003230    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:28.018326    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:28.018338    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:28.031223    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:28.031234    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:28.043529    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:28.043540    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:28.084352    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:28.084363    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:28.096692    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:28.096703    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:28.115068    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:28.115083    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:28.133634    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:28.133650    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:28.146023    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:28.146035    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:28.160118    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:28.160128    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:30.677649    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:27.775049    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:27.775386    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:27.812722    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:27.812774    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:27.830107    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:27.830157    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:27.843840    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:27.843880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:27.855806    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:27.855878    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:27.867543    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:27.867613    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:27.879070    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:27.879149    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:27.890222    6638 logs.go:276] 0 containers: []
	W0718 21:20:27.890235    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:27.890290    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:27.902109    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:27.902170    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:27.902181    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:27.917270    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:27.917281    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:27.959224    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:27.959245    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:27.973678    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:27.973688    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:27.986071    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:27.986083    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:28.002024    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:28.002036    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:28.020245    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:28.020254    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:28.059784    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:28.059803    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:28.065049    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:28.065060    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:28.080396    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:28.080407    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:28.093197    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:28.093209    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:28.110303    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:28.110317    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:28.123494    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:28.123506    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:28.135933    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:28.135943    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:28.160852    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:28.160860    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:28.176558    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:28.176568    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:28.188897    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:28.188914    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:30.726779    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:35.679059    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:35.679499    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:35.717580    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:35.717755    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:35.738929    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:35.739027    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:35.755980    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:35.756060    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:35.769979    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:35.770052    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:35.781787    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:35.781861    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:35.793102    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:35.793178    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:35.804546    6499 logs.go:276] 0 containers: []
	W0718 21:20:35.804557    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:35.804617    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:35.816201    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:35.816220    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:35.816226    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:35.834851    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:35.834860    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:35.850434    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:35.850442    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:35.855310    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:35.855326    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:35.870599    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:35.870611    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:35.885844    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:35.885859    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:35.898896    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:35.898907    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:35.914331    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:35.914339    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:35.928170    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:35.928182    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:35.940817    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:35.940829    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:35.953516    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:35.953526    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:35.729161    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:35.729321    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:35.747988    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:35.748102    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:35.762433    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:35.762508    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:35.774974    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:35.775043    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:35.786546    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:35.786619    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:35.800920    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:35.800996    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:35.812189    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:35.812263    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:35.823139    6638 logs.go:276] 0 containers: []
	W0718 21:20:35.823151    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:35.823214    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:35.834527    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:35.834546    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:35.834552    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:35.850172    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:35.850184    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:35.862330    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:35.862342    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:35.878296    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:35.878310    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:35.890565    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:35.890578    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:35.913931    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:35.913949    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:35.918591    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:35.918603    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:35.967817    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:35.967830    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:35.982070    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:35.982084    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:35.997295    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:35.997308    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:36.009398    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:36.009411    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:36.027252    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:36.027267    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:36.040344    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:36.040358    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:36.052798    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:36.052811    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:36.095409    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:36.095421    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:36.130803    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:36.130816    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:36.149168    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:36.149180    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:35.978015    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:35.978034    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:36.016308    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:36.016320    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:36.055447    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:36.055456    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:36.069035    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:36.069048    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:38.584636    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:38.663230    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:43.586902    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:43.587038    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:43.602443    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:43.602520    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:43.613161    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:43.613238    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:43.623434    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:43.623507    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:43.638458    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:43.638523    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:43.648838    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:43.648909    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:43.659117    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:43.659182    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:43.670625    6499 logs.go:276] 0 containers: []
	W0718 21:20:43.670636    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:43.670694    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:43.682063    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:43.682082    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:43.682088    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:43.694799    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:43.694811    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:43.713359    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:43.713368    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:43.755757    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:43.755782    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:43.769533    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:43.769543    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:43.795891    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:43.795904    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:43.808986    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:43.808997    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:43.844215    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:43.844225    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:43.860068    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:43.860086    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:43.875198    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:43.875212    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:43.880186    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:43.880195    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:43.893423    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:43.893436    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:43.906209    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:43.906221    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:43.919070    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:43.919084    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:43.934684    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:43.934695    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:43.665350    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:43.665420    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:43.677593    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:43.677670    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:43.689759    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:43.689827    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:43.700906    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:43.700983    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:43.712032    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:43.712103    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:43.723624    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:43.723696    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:43.734577    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:43.734657    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:43.745529    6638 logs.go:276] 0 containers: []
	W0718 21:20:43.745542    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:43.745607    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:43.757259    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:43.757275    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:43.757279    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:43.798104    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:43.798121    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:43.815969    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:43.815981    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:43.831483    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:43.831496    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:43.843782    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:43.843794    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:43.848634    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:43.848645    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:43.863206    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:43.863215    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:43.878415    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:43.878423    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:43.890963    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:43.890976    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:43.906674    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:43.906686    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:43.925052    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:43.925067    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:43.944559    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:43.944574    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:43.984048    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:43.984059    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:44.002115    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:44.002128    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:44.025287    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:44.025297    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:44.039285    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:44.039295    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:44.055198    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:44.055210    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:46.449873    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:46.591879    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:51.451938    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:51.452081    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:51.465073    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:51.465153    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:51.476142    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:51.476220    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:51.487170    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:51.487241    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:51.497633    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:51.497708    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:51.508142    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:51.508217    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:51.518612    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:51.518685    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:51.528476    6499 logs.go:276] 0 containers: []
	W0718 21:20:51.528486    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:51.528542    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:51.539565    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:51.539581    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:51.539586    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:51.551124    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:51.551136    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:51.566055    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:51.566066    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:51.599565    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:51.599579    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:51.614433    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:51.614445    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:20:51.627587    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:51.627601    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:51.647069    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:51.647080    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:51.674119    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:51.674132    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:51.713124    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:51.713136    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:51.726241    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:51.726254    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:51.742988    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:51.742999    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:51.758924    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:51.758936    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:51.771778    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:51.771791    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:51.776740    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:51.776749    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:51.790167    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:51.790179    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:54.306820    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:51.593923    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:51.594085    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:51.605174    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:51.605252    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:51.616627    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:51.616703    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:51.627647    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:51.627710    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:51.638802    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:51.638880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:51.649823    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:51.649894    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:51.666186    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:51.666254    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:51.682502    6638 logs.go:276] 0 containers: []
	W0718 21:20:51.682516    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:51.682580    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:51.694188    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:51.694212    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:51.694217    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:51.731723    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:51.731736    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:51.747375    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:51.747388    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:51.760157    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:51.760165    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:51.775859    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:51.775871    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:51.780863    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:51.780875    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:51.804003    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:51.804019    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:51.817409    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:51.817419    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:51.829551    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:51.829563    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:51.867379    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:51.867391    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:51.885999    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:51.886009    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:51.900751    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:51.900762    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:51.916003    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:51.916012    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:51.927763    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:51.927774    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:51.941762    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:51.941776    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:51.953137    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:51.953149    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:51.975837    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:51.975846    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:54.516264    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:59.518416    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:59.518449    6638 kubeadm.go:597] duration metric: took 4m4.207913125s to restartPrimaryControlPlane
	W0718 21:20:59.518482    6638 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0718 21:20:59.518495    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0718 21:21:00.558455    6638 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.039978s)
	I0718 21:21:00.558511    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:21:00.563416    6638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:21:00.566207    6638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:21:00.568954    6638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:21:00.568959    6638 kubeadm.go:157] found existing configuration files:
	
	I0718 21:21:00.569000    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf
	I0718 21:21:00.571454    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:21:00.571479    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:21:00.574243    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf
	I0718 21:21:00.577071    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:21:00.577091    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:21:00.579739    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf
	I0718 21:21:00.582781    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:21:00.582808    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:21:00.586135    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf
	I0718 21:21:00.588991    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:21:00.589015    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:21:00.591715    6638 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 21:21:00.609028    6638 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0718 21:21:00.609059    6638 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 21:21:00.660759    6638 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 21:21:00.660809    6638 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 21:21:00.660865    6638 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 21:21:00.714491    6638 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 21:20:59.309462    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:59.309822    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:59.347475    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:20:59.347610    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:59.364222    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:20:59.364311    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:59.377542    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:20:59.377611    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:59.389079    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:20:59.389152    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:59.400523    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:20:59.400590    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:59.411348    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:20:59.411414    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:59.422141    6499 logs.go:276] 0 containers: []
	W0718 21:20:59.422160    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:59.422224    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:59.432906    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:20:59.432925    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:20:59.432930    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:20:59.453667    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:20:59.453678    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:20:59.472006    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:59.472017    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:59.507232    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:20:59.507242    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:20:59.522418    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:20:59.522429    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:20:59.534947    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:20:59.534959    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:20:59.551628    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:20:59.551640    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:20:59.564362    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:20:59.564373    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:59.577295    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:59.577311    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:59.616442    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:20:59.616456    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:20:59.635820    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:20:59.635830    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:20:59.648033    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:20:59.648044    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:20:59.660217    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:59.660229    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:59.684090    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:59.684106    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:59.688966    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:20:59.688976    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:00.722838    6638 out.go:204]   - Generating certificates and keys ...
	I0718 21:21:00.722870    6638 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 21:21:00.722909    6638 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 21:21:00.722957    6638 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0718 21:21:00.722993    6638 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0718 21:21:00.723034    6638 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0718 21:21:00.723065    6638 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0718 21:21:00.723098    6638 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0718 21:21:00.723126    6638 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0718 21:21:00.723162    6638 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0718 21:21:00.723202    6638 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0718 21:21:00.723224    6638 kubeadm.go:310] [certs] Using the existing "sa" key
	I0718 21:21:00.723252    6638 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 21:21:00.835794    6638 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 21:21:00.921562    6638 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 21:21:00.969776    6638 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 21:21:01.070162    6638 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 21:21:01.097794    6638 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 21:21:01.098257    6638 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 21:21:01.098288    6638 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 21:21:01.171646    6638 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 21:21:01.175816    6638 out.go:204]   - Booting up control plane ...
	I0718 21:21:01.175865    6638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 21:21:01.175907    6638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 21:21:01.175942    6638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 21:21:01.175980    6638 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 21:21:01.176099    6638 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0718 21:21:02.204019    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:05.676346    6638 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504437 seconds
	I0718 21:21:05.676437    6638 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 21:21:05.681924    6638 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 21:21:06.192729    6638 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 21:21:06.192925    6638 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-465000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 21:21:06.696520    6638 kubeadm.go:310] [bootstrap-token] Using token: 7z5uzo.dwmxbixp3b0364hf
	I0718 21:21:06.703019    6638 out.go:204]   - Configuring RBAC rules ...
	I0718 21:21:06.703090    6638 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 21:21:06.703141    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 21:21:06.707987    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 21:21:06.708874    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 21:21:06.709782    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 21:21:06.710789    6638 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 21:21:06.713892    6638 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 21:21:06.879047    6638 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 21:21:07.099953    6638 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 21:21:07.100328    6638 kubeadm.go:310] 
	I0718 21:21:07.100358    6638 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 21:21:07.100361    6638 kubeadm.go:310] 
	I0718 21:21:07.100396    6638 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 21:21:07.100419    6638 kubeadm.go:310] 
	I0718 21:21:07.100435    6638 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 21:21:07.100478    6638 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 21:21:07.100506    6638 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 21:21:07.100511    6638 kubeadm.go:310] 
	I0718 21:21:07.100539    6638 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 21:21:07.100542    6638 kubeadm.go:310] 
	I0718 21:21:07.100567    6638 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 21:21:07.100570    6638 kubeadm.go:310] 
	I0718 21:21:07.100609    6638 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 21:21:07.100651    6638 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 21:21:07.100697    6638 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 21:21:07.100708    6638 kubeadm.go:310] 
	I0718 21:21:07.100757    6638 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 21:21:07.100794    6638 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 21:21:07.100797    6638 kubeadm.go:310] 
	I0718 21:21:07.100838    6638 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7z5uzo.dwmxbixp3b0364hf \
	I0718 21:21:07.100899    6638 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 21:21:07.100914    6638 kubeadm.go:310] 	--control-plane 
	I0718 21:21:07.100917    6638 kubeadm.go:310] 
	I0718 21:21:07.100959    6638 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 21:21:07.100963    6638 kubeadm.go:310] 
	I0718 21:21:07.101009    6638 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7z5uzo.dwmxbixp3b0364hf \
	I0718 21:21:07.101054    6638 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 21:21:07.101184    6638 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 21:21:07.101252    6638 cni.go:84] Creating CNI manager for ""
	I0718 21:21:07.101262    6638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:21:07.105508    6638 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0718 21:21:07.113356    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0718 21:21:07.116193    6638 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0718 21:21:07.120926    6638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 21:21:07.121003    6638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 21:21:07.121009    6638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-465000 minikube.k8s.io/updated_at=2024_07_18T21_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=stopped-upgrade-465000 minikube.k8s.io/primary=true
	I0718 21:21:07.160980    6638 ops.go:34] apiserver oom_adj: -16
	I0718 21:21:07.160984    6638 kubeadm.go:1113] duration metric: took 40.0165ms to wait for elevateKubeSystemPrivileges
	I0718 21:21:07.160994    6638 kubeadm.go:394] duration metric: took 4m11.867085417s to StartCluster
	I0718 21:21:07.161006    6638 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:21:07.161099    6638 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:21:07.161521    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:21:07.161896    6638 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:21:07.161900    6638 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 21:21:07.161933    6638 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-465000"
	I0718 21:21:07.161949    6638 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-465000"
	W0718 21:21:07.161954    6638 addons.go:243] addon storage-provisioner should already be in state true
	I0718 21:21:07.161966    6638 host.go:66] Checking if "stopped-upgrade-465000" exists ...
	I0718 21:21:07.161967    6638 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-465000"
	I0718 21:21:07.161983    6638 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-465000"
	I0718 21:21:07.162020    6638 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:21:07.165577    6638 out.go:177] * Verifying Kubernetes components...
	I0718 21:21:07.166262    6638 kapi.go:59] client config for stopped-upgrade-465000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101c0f790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:21:07.169709    6638 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-465000"
	W0718 21:21:07.169714    6638 addons.go:243] addon default-storageclass should already be in state true
	I0718 21:21:07.169721    6638 host.go:66] Checking if "stopped-upgrade-465000" exists ...
	I0718 21:21:07.170245    6638 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 21:21:07.170250    6638 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 21:21:07.170256    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:21:07.173483    6638 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:21:07.206118    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:07.206197    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:07.217975    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:07.218040    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:07.229632    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:07.229704    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:07.245758    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:07.245833    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:07.256484    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:07.256549    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:07.268183    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:07.268248    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:07.279603    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:07.279671    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:07.291100    6499 logs.go:276] 0 containers: []
	W0718 21:21:07.291113    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:07.291173    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:07.302035    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:07.302055    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:07.302060    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:07.315520    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:07.315533    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:07.327998    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:07.328011    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:07.355051    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:07.355070    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:07.392795    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:07.392812    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:07.409325    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:07.409337    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:07.422345    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:07.422358    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:07.435233    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:07.435244    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:07.452675    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:07.452687    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:07.477647    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:07.477657    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:07.496441    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:07.496453    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:07.509535    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:07.509548    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:07.514749    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:07.514760    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:07.554088    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:07.554107    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:07.567574    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:07.567586    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:10.085284    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:07.177355    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:21:07.181567    6638 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:21:07.181574    6638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 21:21:07.181580    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:21:07.258891    6638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:21:07.265115    6638 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:21:07.265178    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:21:07.270538    6638 api_server.go:72] duration metric: took 108.632416ms to wait for apiserver process to appear ...
	I0718 21:21:07.270547    6638 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:21:07.270556    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:07.275737    6638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:21:07.328247    6638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 21:21:15.087355    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:15.087569    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:15.103105    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:15.103197    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:15.115412    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:15.115486    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:15.125845    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:15.125920    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:15.136137    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:15.136208    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:15.146724    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:15.146794    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:15.157857    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:15.157923    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:15.168136    6499 logs.go:276] 0 containers: []
	W0718 21:21:15.168146    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:15.168206    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:15.178560    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:15.178577    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:15.178582    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:15.213580    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:15.213589    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:15.250687    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:15.250697    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:15.262550    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:15.262561    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:15.275223    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:15.275235    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:15.280163    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:15.280170    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:15.294664    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:15.294678    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:15.306719    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:15.306729    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:15.330529    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:15.330536    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:15.348557    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:15.348567    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:15.360921    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:15.360932    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:15.372951    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:15.372962    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:15.388651    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:15.388661    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:15.405274    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:15.405284    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:15.419966    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:15.419977    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:12.272523    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:12.272550    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:17.940712    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:17.272661    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:17.272693    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:22.942814    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:22.943052    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:22.969569    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:22.969672    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:22.985883    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:22.985959    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:23.000189    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:23.000263    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:23.011508    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:23.011569    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:23.022321    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:23.022387    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:23.034008    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:23.034079    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:23.044103    6499 logs.go:276] 0 containers: []
	W0718 21:21:23.044115    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:23.044170    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:23.055596    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:23.055612    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:23.055617    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:23.073399    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:23.073409    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:23.107917    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:23.107927    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:23.122160    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:23.122172    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:23.136014    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:23.136024    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:23.148233    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:23.148249    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:23.161046    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:23.161059    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:23.173336    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:23.173353    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:23.206792    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:23.206800    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:23.211511    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:23.211520    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:23.226396    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:23.226406    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:23.238427    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:23.238438    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:23.250898    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:23.250908    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:23.268624    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:23.268634    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:23.284842    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:23.284851    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:25.810318    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:22.272902    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:22.272956    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:30.812444    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:30.812609    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:30.824679    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:30.824750    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:30.835379    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:30.835451    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:30.846157    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:30.846225    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:30.861817    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:30.861885    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:30.872261    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:30.872333    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:30.882619    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:30.882688    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:30.892796    6499 logs.go:276] 0 containers: []
	W0718 21:21:30.892808    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:30.892867    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:30.903712    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:30.903730    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:30.903736    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:30.921123    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:30.921132    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:30.933050    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:30.933064    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:30.937710    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:30.937717    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:30.952448    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:30.952460    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:27.273436    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:27.273483    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:30.967220    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:30.967230    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:30.979140    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:30.979151    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:30.991012    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:30.991025    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:31.002666    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:31.002675    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:31.014649    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:31.014662    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:31.039981    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:31.039995    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:31.076613    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:31.076631    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:31.113236    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:31.113247    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:31.127417    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:31.127429    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:31.139201    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:31.139213    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:33.650964    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:32.274012    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:32.274044    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:37.274652    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:37.274679    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0718 21:21:37.674123    6638 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0718 21:21:37.677170    6638 out.go:177] * Enabled addons: storage-provisioner
	I0718 21:21:38.651292    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:38.651520    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:38.669017    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:38.669102    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:38.682068    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:38.682141    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:38.693693    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:38.693756    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:38.703932    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:38.703998    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:38.714604    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:38.714680    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:38.725048    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:38.725115    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:38.736491    6499 logs.go:276] 0 containers: []
	W0718 21:21:38.736503    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:38.736559    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:38.746660    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:38.746675    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:38.746680    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:38.761346    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:38.761360    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:38.773378    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:38.773391    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:38.785256    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:38.785268    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:38.789708    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:38.789714    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:38.804988    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:38.805002    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:38.816372    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:38.816383    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:38.849082    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:38.849093    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:38.860888    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:38.860898    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:38.884445    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:38.884458    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:38.895694    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:38.895706    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:38.930700    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:38.930715    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:38.947075    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:38.947085    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:38.958944    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:38.958955    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:38.971289    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:38.971304    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:37.687953    6638 addons.go:510] duration metric: took 30.5269365s for enable addons: enabled=[storage-provisioner]
	I0718 21:21:41.495510    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:42.275526    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:42.275572    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:46.497665    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:46.497940    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:21:46.522380    6499 logs.go:276] 1 containers: [fddb86a543b1]
	I0718 21:21:46.522510    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:21:46.541012    6499 logs.go:276] 1 containers: [2f12a9a97ab3]
	I0718 21:21:46.541101    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:21:46.556252    6499 logs.go:276] 4 containers: [5fee742f34a3 33cf13bfd332 8b9699bb4d89 8bbf8484fb13]
	I0718 21:21:46.556320    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:21:46.566625    6499 logs.go:276] 1 containers: [3efb64207353]
	I0718 21:21:46.566691    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:21:46.581608    6499 logs.go:276] 1 containers: [4074e09ba5d8]
	I0718 21:21:46.581672    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:21:46.592110    6499 logs.go:276] 1 containers: [937ec4202d19]
	I0718 21:21:46.592183    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:21:46.602537    6499 logs.go:276] 0 containers: []
	W0718 21:21:46.602549    6499 logs.go:278] No container was found matching "kindnet"
	I0718 21:21:46.602601    6499 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:21:46.613123    6499 logs.go:276] 1 containers: [ed70daa82d6e]
	I0718 21:21:46.613141    6499 logs.go:123] Gathering logs for kube-scheduler [3efb64207353] ...
	I0718 21:21:46.613146    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efb64207353"
	I0718 21:21:46.627457    6499 logs.go:123] Gathering logs for container status ...
	I0718 21:21:46.627470    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:21:46.639921    6499 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:21:46.639934    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:21:46.674439    6499 logs.go:123] Gathering logs for coredns [33cf13bfd332] ...
	I0718 21:21:46.674451    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33cf13bfd332"
	I0718 21:21:46.686099    6499 logs.go:123] Gathering logs for coredns [5fee742f34a3] ...
	I0718 21:21:46.686113    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fee742f34a3"
	I0718 21:21:46.698168    6499 logs.go:123] Gathering logs for kube-proxy [4074e09ba5d8] ...
	I0718 21:21:46.698180    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4074e09ba5d8"
	I0718 21:21:46.710126    6499 logs.go:123] Gathering logs for Docker ...
	I0718 21:21:46.710136    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:21:46.735140    6499 logs.go:123] Gathering logs for kubelet ...
	I0718 21:21:46.735151    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:21:46.769964    6499 logs.go:123] Gathering logs for etcd [2f12a9a97ab3] ...
	I0718 21:21:46.769974    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f12a9a97ab3"
	I0718 21:21:46.784026    6499 logs.go:123] Gathering logs for coredns [8bbf8484fb13] ...
	I0718 21:21:46.784037    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bbf8484fb13"
	I0718 21:21:46.795501    6499 logs.go:123] Gathering logs for storage-provisioner [ed70daa82d6e] ...
	I0718 21:21:46.795511    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed70daa82d6e"
	I0718 21:21:46.808568    6499 logs.go:123] Gathering logs for kube-apiserver [fddb86a543b1] ...
	I0718 21:21:46.808578    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fddb86a543b1"
	I0718 21:21:46.822699    6499 logs.go:123] Gathering logs for coredns [8b9699bb4d89] ...
	I0718 21:21:46.822710    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9699bb4d89"
	I0718 21:21:46.834812    6499 logs.go:123] Gathering logs for dmesg ...
	I0718 21:21:46.834824    6499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:21:46.839630    6499 logs.go:123] Gathering logs for kube-controller-manager [937ec4202d19] ...
	I0718 21:21:46.839636    6499 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 937ec4202d19"
	I0718 21:21:49.359381    6499 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:47.276815    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:47.276877    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:54.361493    6499 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:54.366042    6499 out.go:177] 
	W0718 21:21:54.369994    6499 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0718 21:21:54.370003    6499 out.go:239] * 
	W0718 21:21:54.370747    6499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:21:54.381969    6499 out.go:177] 
	I0718 21:21:52.278481    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:52.278537    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:57.280503    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:57.280526    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:02.282591    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:02.282617    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-07-19 04:12:39 UTC, ends at Fri 2024-07-19 04:22:10 UTC. --
	Jul 19 04:21:55 running-upgrade-511000 dockerd[3300]: time="2024-07-19T04:21:55.384749418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:21:55 running-upgrade-511000 dockerd[3300]: time="2024-07-19T04:21:55.384777460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:21:55 running-upgrade-511000 dockerd[3300]: time="2024-07-19T04:21:55.384783251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:21:55 running-upgrade-511000 dockerd[3300]: time="2024-07-19T04:21:55.384944333Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/47be420629ddb152a8b59cc86126efb7460fc9e28d406c04b278a671df0e8747 pid=18982 runtime=io.containerd.runc.v2
	Jul 19 04:21:55 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:55Z" level=error msg="ContainerStats resp: {0x40002e15c0 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40009a7940 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40005a6200 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40005a6800 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40001fe480 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40005a7440 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40005a7580 linux}"
	Jul 19 04:21:56 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:56Z" level=error msg="ContainerStats resp: {0x40001ff780 linux}"
	Jul 19 04:21:57 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:21:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 04:22:02 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 04:22:06 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:06Z" level=error msg="ContainerStats resp: {0x40009a6c00 linux}"
	Jul 19 04:22:06 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:06Z" level=error msg="ContainerStats resp: {0x40002e03c0 linux}"
	Jul 19 04:22:07 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 04:22:07 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:07Z" level=error msg="ContainerStats resp: {0x40005a6d80 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x40005a60c0 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x400009c740 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x40005a6740 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x400009df00 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x40008b42c0 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x40008b4780 linux}"
	Jul 19 04:22:08 running-upgrade-511000 cri-dockerd[3146]: time="2024-07-19T04:22:08Z" level=error msg="ContainerStats resp: {0x40008b4c00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	47be420629ddb       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   56de7109b7207
	e4411bb2bd0db       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   5eab297bb0693
	5fee742f34a39       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   56de7109b7207
	33cf13bfd3320       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5eab297bb0693
	4074e09ba5d88       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ce8ec0c141691
	ed70daa82d6ed       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   46e1085e0d655
	fddb86a543b1c       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   252b240cd1079
	3efb64207353a       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   38c66f3413ed8
	2f12a9a97ab35       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   99424d7b534cb
	937ec4202d19e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   f1b7ba7951b76
	
	
	==> coredns [33cf13bfd332] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:59252->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:50915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:60329->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:60401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:37647->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:35211->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:50512->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:55978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:49639->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4261323302524455456.2690038983121176955. HINFO: read udp 10.244.0.2:60232->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [47be420629dd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2609055135287974280.7800950901618947168. HINFO: read udp 10.244.0.3:48330->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2609055135287974280.7800950901618947168. HINFO: read udp 10.244.0.3:42113->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2609055135287974280.7800950901618947168. HINFO: read udp 10.244.0.3:43698->10.0.2.3:53: i/o timeout
	
	
	==> coredns [5fee742f34a3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:39847->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:55290->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:51809->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:35227->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:60174->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:54686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:43756->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:35687->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:40089->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 722704310612107998.8821223485221341437. HINFO: read udp 10.244.0.3:50363->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4411bb2bd0d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4962063346791766074.1227996344286506307. HINFO: read udp 10.244.0.2:46394->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4962063346791766074.1227996344286506307. HINFO: read udp 10.244.0.2:39236->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4962063346791766074.1227996344286506307. HINFO: read udp 10.244.0.2:51936->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-511000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-511000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=running-upgrade-511000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T21_17_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:17:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-511000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:22:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:17:53 +0000   Fri, 19 Jul 2024 04:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:17:53 +0000   Fri, 19 Jul 2024 04:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:17:53 +0000   Fri, 19 Jul 2024 04:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:17:53 +0000   Fri, 19 Jul 2024 04:17:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-511000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c786c193fb246618827c20e5755a826
	  System UUID:                9c786c193fb246618827c20e5755a826
	  Boot ID:                    c81e31ee-8973-4b85-bef0-36e74ffe1074
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-lcmwr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-zw8bn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-511000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-511000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-511000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-4km7r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-511000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-511000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-511000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-511000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-511000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-511000 event: Registered Node running-upgrade-511000 in Controller
	
	
	==> dmesg <==
	[  +1.607035] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.065346] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.082321] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.137922] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.072845] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.077826] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.961923] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[Jul19 04:13] systemd-fstab-generator[2018]: Ignoring "noauto" for root device
	[  +2.895669] systemd-fstab-generator[2296]: Ignoring "noauto" for root device
	[  +0.142180] systemd-fstab-generator[2329]: Ignoring "noauto" for root device
	[  +0.089641] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
	[  +0.094915] systemd-fstab-generator[2353]: Ignoring "noauto" for root device
	[ +13.467423] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.211196] systemd-fstab-generator[3102]: Ignoring "noauto" for root device
	[  +0.072503] systemd-fstab-generator[3114]: Ignoring "noauto" for root device
	[  +0.081121] systemd-fstab-generator[3125]: Ignoring "noauto" for root device
	[  +0.099073] systemd-fstab-generator[3139]: Ignoring "noauto" for root device
	[  +2.318383] systemd-fstab-generator[3287]: Ignoring "noauto" for root device
	[  +2.765847] systemd-fstab-generator[3666]: Ignoring "noauto" for root device
	[  +1.162210] systemd-fstab-generator[3937]: Ignoring "noauto" for root device
	[Jul19 04:14] kauditd_printk_skb: 68 callbacks suppressed
	[Jul19 04:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.481844] systemd-fstab-generator[12007]: Ignoring "noauto" for root device
	[  +5.643667] systemd-fstab-generator[12612]: Ignoring "noauto" for root device
	[  +0.470457] systemd-fstab-generator[12742]: Ignoring "noauto" for root device
	
	
	==> etcd [2f12a9a97ab3] <==
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-19T04:17:48.734Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-19T04:17:49.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-511000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:17:49.591Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:17:49.592Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 04:22:10 up 9 min,  0 users,  load average: 0.30, 0.40, 0.23
	Linux running-upgrade-511000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fddb86a543b1] <==
	I0719 04:17:50.743332       1 controller.go:611] quota admission added evaluator for: namespaces
	I0719 04:17:50.803210       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 04:17:50.803426       1 cache.go:39] Caches are synced for autoregister controller
	I0719 04:17:50.803562       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0719 04:17:50.803954       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 04:17:50.828005       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0719 04:17:50.837707       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0719 04:17:51.548091       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 04:17:51.727706       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 04:17:51.732714       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 04:17:51.732869       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 04:17:51.898150       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 04:17:51.908843       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 04:17:51.949156       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0719 04:17:51.951057       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0719 04:17:51.951402       1 controller.go:611] quota admission added evaluator for: endpoints
	I0719 04:17:51.952732       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 04:17:52.826203       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0719 04:17:53.355805       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0719 04:17:53.359066       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0719 04:17:53.366748       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0719 04:17:53.415557       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 04:18:07.096770       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0719 04:18:07.150831       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0719 04:18:07.615277       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [937ec4202d19] <==
	I0719 04:18:06.332140       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0719 04:18:06.332262       1 event.go:294] "Event occurred" object="running-upgrade-511000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-511000 event: Registered Node running-upgrade-511000 in Controller"
	I0719 04:18:06.339668       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0719 04:18:06.345059       1 shared_informer.go:262] Caches are synced for attach detach
	I0719 04:18:06.372859       1 shared_informer.go:262] Caches are synced for node
	I0719 04:18:06.372913       1 range_allocator.go:173] Starting range CIDR allocator
	I0719 04:18:06.372926       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0719 04:18:06.372935       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0719 04:18:06.375586       1 range_allocator.go:374] Set node running-upgrade-511000 PodCIDR to [10.244.0.0/24]
	I0719 04:18:06.377854       1 shared_informer.go:262] Caches are synced for GC
	I0719 04:18:06.384043       1 shared_informer.go:262] Caches are synced for TTL
	I0719 04:18:06.395410       1 shared_informer.go:262] Caches are synced for persistent volume
	I0719 04:18:06.395577       1 shared_informer.go:262] Caches are synced for daemon sets
	I0719 04:18:06.395613       1 shared_informer.go:262] Caches are synced for deployment
	I0719 04:18:06.444526       1 shared_informer.go:262] Caches are synced for disruption
	I0719 04:18:06.444538       1 disruption.go:371] Sending events to api server.
	I0719 04:18:06.449651       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 04:18:06.451158       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 04:18:06.860901       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 04:18:06.896221       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 04:18:06.896233       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0719 04:18:07.099559       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4km7r"
	I0719 04:18:07.152403       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0719 04:18:07.247803       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lcmwr"
	I0719 04:18:07.250950       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zw8bn"
	
	
	==> kube-proxy [4074e09ba5d8] <==
	I0719 04:18:07.599103       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0719 04:18:07.599127       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0719 04:18:07.599137       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0719 04:18:07.611697       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0719 04:18:07.611707       1 server_others.go:206] "Using iptables Proxier"
	I0719 04:18:07.611720       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0719 04:18:07.611975       1 server.go:661] "Version info" version="v1.24.1"
	I0719 04:18:07.611993       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:18:07.612301       1 config.go:317] "Starting service config controller"
	I0719 04:18:07.612322       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0719 04:18:07.612340       1 config.go:226] "Starting endpoint slice config controller"
	I0719 04:18:07.612352       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0719 04:18:07.612656       1 config.go:444] "Starting node config controller"
	I0719 04:18:07.612671       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0719 04:18:07.713309       1 shared_informer.go:262] Caches are synced for node config
	I0719 04:18:07.713329       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0719 04:18:07.713340       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [3efb64207353] <==
	W0719 04:17:50.741180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:17:50.741211       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:17:50.741220       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:17:50.741254       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:17:50.741257       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:17:50.741287       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:17:50.741310       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:17:50.741314       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:17:50.741327       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 04:17:50.741334       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 04:17:50.741246       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:17:50.741340       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:17:50.741267       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:17:50.741344       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 04:17:50.741205       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 04:17:50.741348       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 04:17:50.741233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:17:50.741363       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:17:51.731175       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:17:51.731284       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:17:51.746417       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:17:51.746499       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:17:51.841258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:17:51.841346       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0719 04:17:52.139558       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-07-19 04:12:39 UTC, ends at Fri 2024-07-19 04:22:10 UTC. --
	Jul 19 04:17:55 running-upgrade-511000 kubelet[12618]: E0719 04:17:55.592862   12618 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-511000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-511000"
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: I0719 04:18:06.337085   12618 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: I0719 04:18:06.401792   12618 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: I0719 04:18:06.402224   12618 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: I0719 04:18:06.502531   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2c58239d-cc7d-4463-b427-18e52a85d0e1-tmp\") pod \"storage-provisioner\" (UID: \"2c58239d-cc7d-4463-b427-18e52a85d0e1\") " pod="kube-system/storage-provisioner"
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: I0719 04:18:06.502554   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtfc\" (UniqueName: \"kubernetes.io/projected/2c58239d-cc7d-4463-b427-18e52a85d0e1-kube-api-access-wdtfc\") pod \"storage-provisioner\" (UID: \"2c58239d-cc7d-4463-b427-18e52a85d0e1\") " pod="kube-system/storage-provisioner"
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: E0719 04:18:06.605967   12618 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: E0719 04:18:06.605986   12618 projected.go:192] Error preparing data for projected volume kube-api-access-wdtfc for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 19 04:18:06 running-upgrade-511000 kubelet[12618]: E0719 04:18:06.606020   12618 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2c58239d-cc7d-4463-b427-18e52a85d0e1-kube-api-access-wdtfc podName:2c58239d-cc7d-4463-b427-18e52a85d0e1 nodeName:}" failed. No retries permitted until 2024-07-19 04:18:07.106007424 +0000 UTC m=+13.760871031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wdtfc" (UniqueName: "kubernetes.io/projected/2c58239d-cc7d-4463-b427-18e52a85d0e1-kube-api-access-wdtfc") pod "storage-provisioner" (UID: "2c58239d-cc7d-4463-b427-18e52a85d0e1") : configmap "kube-root-ca.crt" not found
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.102086   12618 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.207926   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7fd818d-41fb-4aa3-8645-47fe8dacd898-xtables-lock\") pod \"kube-proxy-4km7r\" (UID: \"c7fd818d-41fb-4aa3-8645-47fe8dacd898\") " pod="kube-system/kube-proxy-4km7r"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.207967   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcz4\" (UniqueName: \"kubernetes.io/projected/c7fd818d-41fb-4aa3-8645-47fe8dacd898-kube-api-access-wqcz4\") pod \"kube-proxy-4km7r\" (UID: \"c7fd818d-41fb-4aa3-8645-47fe8dacd898\") " pod="kube-system/kube-proxy-4km7r"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.207986   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7fd818d-41fb-4aa3-8645-47fe8dacd898-kube-proxy\") pod \"kube-proxy-4km7r\" (UID: \"c7fd818d-41fb-4aa3-8645-47fe8dacd898\") " pod="kube-system/kube-proxy-4km7r"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.207997   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7fd818d-41fb-4aa3-8645-47fe8dacd898-lib-modules\") pod \"kube-proxy-4km7r\" (UID: \"c7fd818d-41fb-4aa3-8645-47fe8dacd898\") " pod="kube-system/kube-proxy-4km7r"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.250180   12618 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.260462   12618 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.408795   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99024fe-6d5f-4cbc-8590-165650128959-config-volume\") pod \"coredns-6d4b75cb6d-zw8bn\" (UID: \"d99024fe-6d5f-4cbc-8590-165650128959\") " pod="kube-system/coredns-6d4b75cb6d-zw8bn"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.408816   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0a92d74-a0cd-4404-b6ba-10fe3445718f-config-volume\") pod \"coredns-6d4b75cb6d-lcmwr\" (UID: \"c0a92d74-a0cd-4404-b6ba-10fe3445718f\") " pod="kube-system/coredns-6d4b75cb6d-lcmwr"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.408828   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhhq\" (UniqueName: \"kubernetes.io/projected/d99024fe-6d5f-4cbc-8590-165650128959-kube-api-access-qzhhq\") pod \"coredns-6d4b75cb6d-zw8bn\" (UID: \"d99024fe-6d5f-4cbc-8590-165650128959\") " pod="kube-system/coredns-6d4b75cb6d-zw8bn"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: I0719 04:18:07.408840   12618 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5zf\" (UniqueName: \"kubernetes.io/projected/c0a92d74-a0cd-4404-b6ba-10fe3445718f-kube-api-access-4f5zf\") pod \"coredns-6d4b75cb6d-lcmwr\" (UID: \"c0a92d74-a0cd-4404-b6ba-10fe3445718f\") " pod="kube-system/coredns-6d4b75cb6d-lcmwr"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: E0719 04:18:07.521285   12618 remote_runtime.go:578] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 4074e09ba5d88381036bc47f2da9d17aaf24171b1058edb8763bbe5ddcb62d17" containerID="4074e09ba5d88381036bc47f2da9d17aaf24171b1058edb8763bbe5ddcb62d17"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: E0719 04:18:07.521306   12618 kuberuntime_manager.go:1069] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 4074e09ba5d88381036bc47f2da9d17aaf24171b1058edb8763bbe5ddcb62d17" pod="kube-system/kube-proxy-4km7r"
	Jul 19 04:18:07 running-upgrade-511000 kubelet[12618]: E0719 04:18:07.521312   12618 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: 4074e09ba5d88381036bc47f2da9d17aaf24171b1058edb8763bbe5ddcb62d17" pod="kube-system/kube-proxy-4km7r"
	Jul 19 04:21:55 running-upgrade-511000 kubelet[12618]: I0719 04:21:55.715759   12618 scope.go:110] "RemoveContainer" containerID="8b9699bb4d89e1ac343380444bb56eace7d1f0461f66aa05a8cf562b326be02e"
	Jul 19 04:21:55 running-upgrade-511000 kubelet[12618]: I0719 04:21:55.745125   12618 scope.go:110] "RemoveContainer" containerID="8bbf8484fb13f506490ad5ad56ff3bb0ed36b2879e724ecd5384677c784817c6"
	
	
	==> storage-provisioner [ed70daa82d6e] <==
	I0719 04:18:07.414615       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 04:18:07.419837       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 04:18:07.419856       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 04:18:07.422743       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 04:18:07.422842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-511000_fcd0ba68-e1fe-420b-8443-b10475ea192d!
	I0719 04:18:07.423813       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b2e41431-3832-4c63-81c0-6574043c561b", APIVersion:"v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-511000_fcd0ba68-e1fe-420b-8443-b10475ea192d became leader
	I0719 04:18:07.524501       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-511000_fcd0ba68-e1fe-420b-8443-b10475ea192d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-511000 -n running-upgrade-511000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-511000 -n running-upgrade-511000: exit status 2 (15.676215667s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-511000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-511000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-511000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-511000: (1.191864167s)
--- FAIL: TestRunningBinaryUpgrade (613.35s)
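Note on TestRunningBinaryUpgrade: the post-mortem above shows the profile's apiserver reporting "Stopped" (exit status 2), after which the helpers skip the kubectl commands and delete the profile. A minimal, hypothetical standalone sketch (not part of this suite) that re-runs the same status command shown above and polls until the apiserver reports Running could help separate a slow apiserver from one that never comes up; the binary path, flags, and profile name are copied verbatim from the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Poll the same status command helpers_test.go runs above until the
	// apiserver reports Running or a deadline passes. Only a debugging
	// sketch; it assumes it is run from the integration-test working
	// directory where out/minikube-darwin-arm64 exists.
	func main() {
		const profile = "running-upgrade-511000" // profile name taken from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for {
			out, _ := exec.Command("out/minikube-darwin-arm64", "status",
				"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
			state := strings.TrimSpace(string(out))
			fmt.Printf("apiserver state: %q\n", state)
			if state == "Running" || time.Now().After(deadline) {
				return
			}
			time.Sleep(5 * time.Second)
		}
	}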

TestKubernetesUpgrade (17.25s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-797000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-797000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.7294125s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-797000" primary control-plane node in "kubernetes-upgrade-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:15:14.761453    6570 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:15:14.761575    6570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:15:14.761577    6570 out.go:304] Setting ErrFile to fd 2...
	I0718 21:15:14.761580    6570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:15:14.761728    6570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:15:14.762837    6570 out.go:298] Setting JSON to false
	I0718 21:15:14.778989    6570 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4482,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:15:14.779069    6570 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:15:14.783965    6570 out.go:177] * [kubernetes-upgrade-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:15:14.789792    6570 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:15:14.789830    6570 notify.go:220] Checking for updates...
	I0718 21:15:14.796930    6570 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:15:14.799837    6570 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:15:14.802859    6570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:15:14.805887    6570 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:15:14.807104    6570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:15:14.810131    6570 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:15:14.810198    6570 config.go:182] Loaded profile config "running-upgrade-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:15:14.810250    6570 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:15:14.814867    6570 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:15:14.819810    6570 start.go:297] selected driver: qemu2
	I0718 21:15:14.819816    6570 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:15:14.819824    6570 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:15:14.821890    6570 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:15:14.824860    6570 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:15:14.827980    6570 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 21:15:14.827996    6570 cni.go:84] Creating CNI manager for ""
	I0718 21:15:14.828002    6570 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 21:15:14.828035    6570 start.go:340] cluster config:
	{Name:kubernetes-upgrade-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:15:14.831424    6570 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:15:14.838866    6570 out.go:177] * Starting "kubernetes-upgrade-797000" primary control-plane node in "kubernetes-upgrade-797000" cluster
	I0718 21:15:14.842857    6570 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 21:15:14.842869    6570 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 21:15:14.842878    6570 cache.go:56] Caching tarball of preloaded images
	I0718 21:15:14.842928    6570 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:15:14.842932    6570 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0718 21:15:14.842978    6570 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kubernetes-upgrade-797000/config.json ...
	I0718 21:15:14.842989    6570 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kubernetes-upgrade-797000/config.json: {Name:mk1188d71824bf240d2710e67b018188bb8ee6a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:15:14.843262    6570 start.go:360] acquireMachinesLock for kubernetes-upgrade-797000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:15:14.843294    6570 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "kubernetes-upgrade-797000"
	I0718 21:15:14.843303    6570 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:15:14.843324    6570 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:15:14.851789    6570 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:15:14.866748    6570 start.go:159] libmachine.API.Create for "kubernetes-upgrade-797000" (driver="qemu2")
	I0718 21:15:14.866773    6570 client.go:168] LocalClient.Create starting
	I0718 21:15:14.866849    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:15:14.866881    6570 main.go:141] libmachine: Decoding PEM data...
	I0718 21:15:14.866891    6570 main.go:141] libmachine: Parsing certificate...
	I0718 21:15:14.866923    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:15:14.866946    6570 main.go:141] libmachine: Decoding PEM data...
	I0718 21:15:14.866954    6570 main.go:141] libmachine: Parsing certificate...
	I0718 21:15:14.867376    6570 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:15:14.996518    6570 main.go:141] libmachine: Creating SSH key...
	I0718 21:15:15.037626    6570 main.go:141] libmachine: Creating Disk image...
	I0718 21:15:15.037632    6570 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:15:15.037813    6570 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:15.047037    6570 main.go:141] libmachine: STDOUT: 
	I0718 21:15:15.047062    6570 main.go:141] libmachine: STDERR: 
	I0718 21:15:15.047108    6570 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2 +20000M
	I0718 21:15:15.055098    6570 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:15:15.055113    6570 main.go:141] libmachine: STDERR: 
	I0718 21:15:15.055131    6570 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:15.055139    6570 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:15:15.055153    6570 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:15:15.055188    6570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:47:35:1a:46:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:15.056858    6570 main.go:141] libmachine: STDOUT: 
	I0718 21:15:15.056876    6570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:15:15.056898    6570 client.go:171] duration metric: took 190.127625ms to LocalClient.Create
	I0718 21:15:17.059031    6570 start.go:128] duration metric: took 2.215748792s to createHost
	I0718 21:15:17.059090    6570 start.go:83] releasing machines lock for "kubernetes-upgrade-797000", held for 2.215854167s
	W0718 21:15:17.059148    6570 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:15:17.066633    6570 out.go:177] * Deleting "kubernetes-upgrade-797000" in qemu2 ...
	W0718 21:15:17.085841    6570 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:15:17.085862    6570 start.go:729] Will try again in 5 seconds ...
	I0718 21:15:22.087913    6570 start.go:360] acquireMachinesLock for kubernetes-upgrade-797000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:15:22.088473    6570 start.go:364] duration metric: took 469.458µs to acquireMachinesLock for "kubernetes-upgrade-797000"
	I0718 21:15:22.088603    6570 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:15:22.088842    6570 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:15:22.098545    6570 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:15:22.148681    6570 start.go:159] libmachine.API.Create for "kubernetes-upgrade-797000" (driver="qemu2")
	I0718 21:15:22.148735    6570 client.go:168] LocalClient.Create starting
	I0718 21:15:22.148871    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:15:22.148939    6570 main.go:141] libmachine: Decoding PEM data...
	I0718 21:15:22.148954    6570 main.go:141] libmachine: Parsing certificate...
	I0718 21:15:22.149026    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:15:22.149072    6570 main.go:141] libmachine: Decoding PEM data...
	I0718 21:15:22.149085    6570 main.go:141] libmachine: Parsing certificate...
	I0718 21:15:22.149668    6570 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:15:22.286791    6570 main.go:141] libmachine: Creating SSH key...
	I0718 21:15:22.397320    6570 main.go:141] libmachine: Creating Disk image...
	I0718 21:15:22.397327    6570 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:15:22.397511    6570 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:22.407188    6570 main.go:141] libmachine: STDOUT: 
	I0718 21:15:22.407204    6570 main.go:141] libmachine: STDERR: 
	I0718 21:15:22.407254    6570 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2 +20000M
	I0718 21:15:22.415039    6570 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:15:22.415054    6570 main.go:141] libmachine: STDERR: 
	I0718 21:15:22.415066    6570 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:22.415071    6570 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:15:22.415081    6570 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:15:22.415112    6570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e8:f5:54:46:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:22.416833    6570 main.go:141] libmachine: STDOUT: 
	I0718 21:15:22.416851    6570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:15:22.416864    6570 client.go:171] duration metric: took 268.131083ms to LocalClient.Create
	I0718 21:15:24.419022    6570 start.go:128] duration metric: took 2.3302085s to createHost
	I0718 21:15:24.419105    6570 start.go:83] releasing machines lock for "kubernetes-upgrade-797000", held for 2.330675542s
	W0718 21:15:24.419523    6570 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:15:24.431163    6570 out.go:177] 
	W0718 21:15:24.435399    6570 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:15:24.435431    6570 out.go:239] * 
	* 
	W0718 21:15:24.438349    6570 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:15:24.450448    6570 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-797000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-797000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-797000: (2.165209333s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-797000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-797000 status --format={{.Host}}: exit status 7 (57.700916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-797000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-797000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.170210959s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-797000" primary control-plane node in "kubernetes-upgrade-797000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:15:26.718829    6600 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:15:26.718955    6600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:15:26.718962    6600 out.go:304] Setting ErrFile to fd 2...
	I0718 21:15:26.718965    6600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:15:26.719135    6600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:15:26.720198    6600 out.go:298] Setting JSON to false
	I0718 21:15:26.736489    6600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4494,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:15:26.736557    6600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:15:26.741340    6600 out.go:177] * [kubernetes-upgrade-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:15:26.749341    6600 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:15:26.749396    6600 notify.go:220] Checking for updates...
	I0718 21:15:26.756209    6600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:15:26.759279    6600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:15:26.762330    6600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:15:26.765290    6600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:15:26.768227    6600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:15:26.771554    6600 config.go:182] Loaded profile config "kubernetes-upgrade-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0718 21:15:26.771806    6600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:15:26.776175    6600 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:15:26.783265    6600 start.go:297] selected driver: qemu2
	I0718 21:15:26.783272    6600 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:15:26.783342    6600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:15:26.785534    6600 cni.go:84] Creating CNI manager for ""
	I0718 21:15:26.785548    6600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:15:26.785570    6600 start.go:340] cluster config:
	{Name:kubernetes-upgrade-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-797000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:15:26.788853    6600 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:15:26.796237    6600 out.go:177] * Starting "kubernetes-upgrade-797000" primary control-plane node in "kubernetes-upgrade-797000" cluster
	I0718 21:15:26.800309    6600 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 21:15:26.800330    6600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0718 21:15:26.800348    6600 cache.go:56] Caching tarball of preloaded images
	I0718 21:15:26.800441    6600 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:15:26.800446    6600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0718 21:15:26.800499    6600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kubernetes-upgrade-797000/config.json ...
	I0718 21:15:26.800907    6600 start.go:360] acquireMachinesLock for kubernetes-upgrade-797000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:15:26.800934    6600 start.go:364] duration metric: took 20.167µs to acquireMachinesLock for "kubernetes-upgrade-797000"
	I0718 21:15:26.800942    6600 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:15:26.800947    6600 fix.go:54] fixHost starting: 
	I0718 21:15:26.801056    6600 fix.go:112] recreateIfNeeded on kubernetes-upgrade-797000: state=Stopped err=<nil>
	W0718 21:15:26.801064    6600 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:15:26.804265    6600 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-797000" ...
	I0718 21:15:26.812158    6600 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:15:26.812202    6600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e8:f5:54:46:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:26.814137    6600 main.go:141] libmachine: STDOUT: 
	I0718 21:15:26.814152    6600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:15:26.814178    6600 fix.go:56] duration metric: took 13.230667ms for fixHost
	I0718 21:15:26.814182    6600 start.go:83] releasing machines lock for "kubernetes-upgrade-797000", held for 13.244292ms
	W0718 21:15:26.814189    6600 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:15:26.814231    6600 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:15:26.814236    6600 start.go:729] Will try again in 5 seconds ...
	I0718 21:15:31.816145    6600 start.go:360] acquireMachinesLock for kubernetes-upgrade-797000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:15:31.816263    6600 start.go:364] duration metric: took 100.625µs to acquireMachinesLock for "kubernetes-upgrade-797000"
	I0718 21:15:31.816283    6600 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:15:31.816286    6600 fix.go:54] fixHost starting: 
	I0718 21:15:31.816448    6600 fix.go:112] recreateIfNeeded on kubernetes-upgrade-797000: state=Stopped err=<nil>
	W0718 21:15:31.816453    6600 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:15:31.820705    6600 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-797000" ...
	I0718 21:15:31.827555    6600 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:15:31.827600    6600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e8:f5:54:46:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubernetes-upgrade-797000/disk.qcow2
	I0718 21:15:31.829774    6600 main.go:141] libmachine: STDOUT: 
	I0718 21:15:31.829786    6600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:15:31.829805    6600 fix.go:56] duration metric: took 13.519583ms for fixHost
	I0718 21:15:31.829810    6600 start.go:83] releasing machines lock for "kubernetes-upgrade-797000", held for 13.541792ms
	W0718 21:15:31.829858    6600 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:15:31.837590    6600 out.go:177] 
	W0718 21:15:31.841588    6600 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:15:31.841596    6600 out.go:239] * 
	* 
	W0718 21:15:31.842033    6600 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:15:31.851527    6600 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-797000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-797000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-797000 version --output=json: exit status 1 (28.069167ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-797000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-18 21:15:31.888039 -0700 PDT m=+3041.982675417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-797000 -n kubernetes-upgrade-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-797000 -n kubernetes-upgrade-797000: exit status 7 (29.390416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-797000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-797000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-797000
--- FAIL: TestKubernetesUpgrade (17.25s)
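The failure above is environmental rather than an upgrade-logic problem: each start attempt ends with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon was not accepting connections on the build host when qemu was launched through /opt/socket_vmnet/bin/socket_vmnet_client. A minimal sketch of a pre-flight probe for that socket (a hypothetical standalone helper, not part of the test suite; the socket path is assumed from the qemu command line logged above):

	// probe_socket_vmnet.go - standalone sketch: report whether a socket_vmnet
	// daemon is accepting connections on the path minikube's qemu2 driver uses.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const socketPath = "/var/run/socket_vmnet" // path assumed from the log above
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", socketPath, err)
			return
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
	}

Run on the CI host before the qemu2 jobs, a "not reachable" result would point at the environment (daemon not started, wrong socket path, permissions) rather than a driver regression.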

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.77s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4127307652/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.77s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.37s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2077218321/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.37s)
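Both TestHyperkitDriverSkipUpgrade subtests fail the same way: the hyperkit driver only exists for darwin/amd64, so on this arm64 Mac minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade path is exercised. A minimal sketch, with hypothetical names and not taken from the actual suite, of the kind of runtime guard that would turn these runs into skips instead of failures:

	// hyperkit_guard_test.go - sketch of an architecture guard for hyperkit-only tests.
	package upgrade_test

	import (
		"runtime"
		"testing"
	)

	// skipUnlessHyperkitSupported skips the calling test on any platform other
	// than darwin/amd64, the only platform the hyperkit driver supports.
	func skipUnlessHyperkitSupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit driver is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}

	func TestHyperkitUpgradeGuarded(t *testing.T) {
		skipUnlessHyperkitSupported(t)
		// the real driver upgrade checks would run here on darwin/amd64 hosts
	}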

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (575.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3250479244 start -p stopped-upgrade-465000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3250479244 start -p stopped-upgrade-465000 --memory=2200 --vm-driver=qemu2 : (41.250234709s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3250479244 -p stopped-upgrade-465000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3250479244 -p stopped-upgrade-465000 stop: (12.116818666s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-465000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0718 21:18:59.604390    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 21:20:12.950789    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/functional-020000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-465000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.128359959s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-465000" primary control-plane node in "stopped-upgrade-465000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-465000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:16:26.321568    6638 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:16:26.321744    6638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:16:26.321748    6638 out.go:304] Setting ErrFile to fd 2...
	I0718 21:16:26.321751    6638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:16:26.321911    6638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:16:26.323153    6638 out.go:298] Setting JSON to false
	I0718 21:16:26.343393    6638 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4554,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:16:26.343465    6638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:16:26.348386    6638 out.go:177] * [stopped-upgrade-465000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:16:26.356294    6638 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:16:26.356399    6638 notify.go:220] Checking for updates...
	I0718 21:16:26.362200    6638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:16:26.365333    6638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:16:26.368374    6638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:16:26.369602    6638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:16:26.372396    6638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:16:26.375576    6638 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:16:26.379351    6638 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0718 21:16:26.382380    6638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:16:26.386317    6638 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:16:26.393312    6638 start.go:297] selected driver: qemu2
	I0718 21:16:26.393318    6638 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:16:26.393366    6638 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:16:26.395853    6638 cni.go:84] Creating CNI manager for ""
	I0718 21:16:26.395868    6638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:16:26.395889    6638 start.go:340] cluster config:
	{Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:16:26.395945    6638 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:16:26.402245    6638 out.go:177] * Starting "stopped-upgrade-465000" primary control-plane node in "stopped-upgrade-465000" cluster
	I0718 21:16:26.406357    6638 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0718 21:16:26.406374    6638 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0718 21:16:26.406385    6638 cache.go:56] Caching tarball of preloaded images
	I0718 21:16:26.406452    6638 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:16:26.406457    6638 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0718 21:16:26.406504    6638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/config.json ...
	I0718 21:16:26.406900    6638 start.go:360] acquireMachinesLock for stopped-upgrade-465000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:16:26.406925    6638 start.go:364] duration metric: took 19.916µs to acquireMachinesLock for "stopped-upgrade-465000"
	I0718 21:16:26.406932    6638 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:16:26.406938    6638 fix.go:54] fixHost starting: 
	I0718 21:16:26.407042    6638 fix.go:112] recreateIfNeeded on stopped-upgrade-465000: state=Stopped err=<nil>
	W0718 21:16:26.407054    6638 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:16:26.412326    6638 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-465000" ...
	I0718 21:16:26.416360    6638 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:16:26.416422    6638 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50500-:22,hostfwd=tcp::50501-:2376,hostname=stopped-upgrade-465000 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/disk.qcow2
	I0718 21:16:26.459992    6638 main.go:141] libmachine: STDOUT: 
	I0718 21:16:26.460019    6638 main.go:141] libmachine: STDERR: 
	I0718 21:16:26.460025    6638 main.go:141] libmachine: Waiting for VM to start (ssh -p 50500 docker@127.0.0.1)...
	I0718 21:16:46.481802    6638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/config.json ...
	I0718 21:16:46.482611    6638 machine.go:94] provisionDockerMachine start ...
	I0718 21:16:46.482820    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.483403    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.483421    6638 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:16:46.570633    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 21:16:46.570663    6638 buildroot.go:166] provisioning hostname "stopped-upgrade-465000"
	I0718 21:16:46.570795    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.571053    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.571064    6638 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-465000 && echo "stopped-upgrade-465000" | sudo tee /etc/hostname
	I0718 21:16:46.653889    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-465000
	
	I0718 21:16:46.653968    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.654170    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.654186    6638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-465000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-465000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-465000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:16:46.724742    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:16:46.724758    6638 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1213/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1213/.minikube}
	I0718 21:16:46.724768    6638 buildroot.go:174] setting up certificates
	I0718 21:16:46.724773    6638 provision.go:84] configureAuth start
	I0718 21:16:46.724779    6638 provision.go:143] copyHostCerts
	I0718 21:16:46.724866    6638 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem, removing ...
	I0718 21:16:46.724878    6638 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem
	I0718 21:16:46.724995    6638 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.pem (1082 bytes)
	I0718 21:16:46.725205    6638 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem, removing ...
	I0718 21:16:46.725209    6638 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem
	I0718 21:16:46.725267    6638 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/cert.pem (1123 bytes)
	I0718 21:16:46.725379    6638 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem, removing ...
	I0718 21:16:46.725385    6638 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem
	I0718 21:16:46.725435    6638 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1213/.minikube/key.pem (1679 bytes)
	I0718 21:16:46.725534    6638 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-465000 san=[127.0.0.1 localhost minikube stopped-upgrade-465000]
	I0718 21:16:46.863855    6638 provision.go:177] copyRemoteCerts
	I0718 21:16:46.863908    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:16:46.863918    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:16:46.899949    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:16:46.907371    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0718 21:16:46.914145    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 21:16:46.920736    6638 provision.go:87] duration metric: took 195.957166ms to configureAuth
	I0718 21:16:46.920745    6638 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:16:46.920862    6638 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:16:46.920904    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.921000    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.921005    6638 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:16:46.985477    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:16:46.985487    6638 buildroot.go:70] root file system type: tmpfs
	I0718 21:16:46.985536    6638 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:16:46.985583    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:46.985697    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:46.985730    6638 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:16:47.052924    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:16:47.052982    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:47.053097    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:47.053106    6638 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:16:47.415093    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:16:47.415105    6638 machine.go:97] duration metric: took 932.510709ms to provisionDockerMachine
	I0718 21:16:47.415112    6638 start.go:293] postStartSetup for "stopped-upgrade-465000" (driver="qemu2")
	I0718 21:16:47.415119    6638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:16:47.415177    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:16:47.415188    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:16:47.451898    6638 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:16:47.453273    6638 info.go:137] Remote host: Buildroot 2021.02.12
	I0718 21:16:47.453281    6638 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/addons for local assets ...
	I0718 21:16:47.453359    6638 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1213/.minikube/files for local assets ...
	I0718 21:16:47.453452    6638 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem -> 17122.pem in /etc/ssl/certs
	I0718 21:16:47.453559    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:16:47.456345    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /etc/ssl/certs/17122.pem (1708 bytes)
	I0718 21:16:47.463700    6638 start.go:296] duration metric: took 48.583042ms for postStartSetup
	I0718 21:16:47.463713    6638 fix.go:56] duration metric: took 21.057385959s for fixHost
	I0718 21:16:47.463749    6638 main.go:141] libmachine: Using SSH client type: native
	I0718 21:16:47.463855    6638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10087aa10] 0x10087d270 <nil>  [] 0s} localhost 50500 <nil> <nil>}
	I0718 21:16:47.463862    6638 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 21:16:47.526485    6638 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362607.314287088
	
	I0718 21:16:47.526493    6638 fix.go:216] guest clock: 1721362607.314287088
	I0718 21:16:47.526498    6638 fix.go:229] Guest: 2024-07-18 21:16:47.314287088 -0700 PDT Remote: 2024-07-18 21:16:47.463715 -0700 PDT m=+21.175778335 (delta=-149.427912ms)
	I0718 21:16:47.526508    6638 fix.go:200] guest clock delta is within tolerance: -149.427912ms
	I0718 21:16:47.526512    6638 start.go:83] releasing machines lock for "stopped-upgrade-465000", held for 21.120194208s
	I0718 21:16:47.526572    6638 ssh_runner.go:195] Run: cat /version.json
	I0718 21:16:47.526583    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:16:47.526572    6638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:16:47.526614    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	W0718 21:16:47.527192    6638 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50500: connect: connection refused
	I0718 21:16:47.527214    6638 retry.go:31] will retry after 325.508ms: dial tcp [::1]:50500: connect: connection refused
	W0718 21:16:47.558128    6638 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0718 21:16:47.558179    6638 ssh_runner.go:195] Run: systemctl --version
	I0718 21:16:47.559910    6638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 21:16:47.561606    6638 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:16:47.561629    6638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0718 21:16:47.564548    6638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0718 21:16:47.569373    6638 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:16:47.569383    6638 start.go:495] detecting cgroup driver to use...
	I0718 21:16:47.569457    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:16:47.576104    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0718 21:16:47.580139    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:16:47.583596    6638 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:16:47.583624    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:16:47.586673    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:16:47.589493    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:16:47.592613    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:16:47.596096    6638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:16:47.599409    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:16:47.602279    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:16:47.605046    6638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:16:47.608469    6638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:16:47.611578    6638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:16:47.614397    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:47.685016    6638 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:16:47.691741    6638 start.go:495] detecting cgroup driver to use...
	I0718 21:16:47.691806    6638 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:16:47.700578    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:16:47.705266    6638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:16:47.711569    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:16:47.716038    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:16:47.720674    6638 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:16:47.760840    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:16:47.766006    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:16:47.771350    6638 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:16:47.772606    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:16:47.775548    6638 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:16:47.780611    6638 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:16:47.845211    6638 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:16:47.909576    6638 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:16:47.909636    6638 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:16:47.914717    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:47.976509    6638 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:16:49.106384    6638 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.129891916s)
	I0718 21:16:49.106438    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 21:16:49.111243    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:16:49.115593    6638 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 21:16:49.180865    6638 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 21:16:49.245892    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:49.308735    6638 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 21:16:49.314828    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:16:49.319106    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:49.382797    6638 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 21:16:49.422211    6638 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 21:16:49.422293    6638 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 21:16:49.425057    6638 start.go:563] Will wait 60s for crictl version
	I0718 21:16:49.425107    6638 ssh_runner.go:195] Run: which crictl
	I0718 21:16:49.426475    6638 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 21:16:49.440389    6638 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0718 21:16:49.440459    6638 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:16:49.456097    6638 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:16:49.478584    6638 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0718 21:16:49.478647    6638 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0718 21:16:49.479985    6638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 21:16:49.484037    6638 kubeadm.go:883] updating cluster {Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0718 21:16:49.484088    6638 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0718 21:16:49.484129    6638 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:16:49.494427    6638 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 21:16:49.494436    6638 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0718 21:16:49.494483    6638 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 21:16:49.497466    6638 ssh_runner.go:195] Run: which lz4
	I0718 21:16:49.498810    6638 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0718 21:16:49.500097    6638 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0718 21:16:49.500107    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0718 21:16:50.381083    6638 docker.go:649] duration metric: took 882.324792ms to copy over tarball
	I0718 21:16:50.381146    6638 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0718 21:16:51.544903    6638 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.163778s)
	I0718 21:16:51.544918    6638 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0718 21:16:51.560813    6638 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0718 21:16:51.564311    6638 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0718 21:16:51.569483    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:51.636377    6638 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:16:53.376435    6638 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.740088916s)
	I0718 21:16:53.376541    6638 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:16:53.398677    6638 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0718 21:16:53.398686    6638 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0718 21:16:53.398691    6638 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0718 21:16:53.403505    6638 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:53.405605    6638 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.407648    6638 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0718 21:16:53.407678    6638 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:53.416562    6638 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.418175    6638 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.418195    6638 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.418268    6638 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0718 21:16:53.419314    6638 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.419730    6638 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.420815    6638 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.420846    6638 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.421836    6638 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.422800    6638 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.423724    6638 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.423766    6638 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.839486    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0718 21:16:53.848667    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.851004    6638 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0718 21:16:53.851036    6638 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0718 21:16:53.851075    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0718 21:16:53.860648    6638 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0718 21:16:53.860672    6638 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.860733    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0718 21:16:53.864373    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0718 21:16:53.864484    6638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0718 21:16:53.871350    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.878485    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0718 21:16:53.878522    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0718 21:16:53.878541    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0718 21:16:53.883394    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.886159    6638 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0718 21:16:53.886170    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0718 21:16:53.896885    6638 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0718 21:16:53.896904    6638 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.896961    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0718 21:16:53.899048    6638 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0718 21:16:53.899066    6638 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0718 21:16:53.899106    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0718 21:16:53.915304    6638 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0718 21:16:53.915432    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.933603    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.934656    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.936179    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0718 21:16:53.936213    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0718 21:16:53.936228    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0718 21:16:53.941494    6638 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0718 21:16:53.941515    6638 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.941570    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0718 21:16:53.964223    6638 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0718 21:16:53.964243    6638 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.964304    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0718 21:16:53.964593    6638 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0718 21:16:53.964605    6638 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.964628    6638 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0718 21:16:53.964646    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0718 21:16:53.964734    6638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0718 21:16:53.974611    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0718 21:16:53.977635    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0718 21:16:53.977652    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0718 21:16:53.977733    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0718 21:16:53.977833    6638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0718 21:16:53.979620    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0718 21:16:53.979640    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0718 21:16:54.027338    6638 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0718 21:16:54.027447    6638 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:54.044360    6638 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0718 21:16:54.044374    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0718 21:16:54.059296    6638 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0718 21:16:54.059321    6638 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:54.059384    6638 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:16:54.131090    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0718 21:16:54.131110    6638 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0718 21:16:54.131221    6638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0718 21:16:54.143564    6638 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0718 21:16:54.143597    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0718 21:16:54.209879    6638 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0718 21:16:54.209894    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0718 21:16:54.549218    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0718 21:16:54.549240    6638 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0718 21:16:54.549246    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0718 21:16:54.694873    6638 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0718 21:16:54.694910    6638 cache_images.go:92] duration metric: took 1.296250833s to LoadCachedImages
	W0718 21:16:54.694953    6638 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0718 21:16:54.694959    6638 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0718 21:16:54.695015    6638 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-465000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 21:16:54.695086    6638 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 21:16:54.710465    6638 cni.go:84] Creating CNI manager for ""
	I0718 21:16:54.710481    6638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:16:54.710487    6638 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 21:16:54.710496    6638 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-465000 NodeName:stopped-upgrade-465000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 21:16:54.710571    6638 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-465000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 21:16:54.710641    6638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0718 21:16:54.713688    6638 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 21:16:54.713741    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 21:16:54.716696    6638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0718 21:16:54.723065    6638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 21:16:54.729113    6638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0718 21:16:54.735407    6638 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0718 21:16:54.736829    6638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 21:16:54.740864    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:16:54.796831    6638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:16:54.806186    6638 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000 for IP: 10.0.2.15
	I0718 21:16:54.806197    6638 certs.go:194] generating shared ca certs ...
	I0718 21:16:54.806207    6638 certs.go:226] acquiring lock for ca certs: {Name:mka1e103148436c3b254df3e529d04393376ce0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:54.806384    6638 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key
	I0718 21:16:54.806424    6638 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key
	I0718 21:16:54.806429    6638 certs.go:256] generating profile certs ...
	I0718 21:16:54.806496    6638 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.key
	I0718 21:16:54.806521    6638 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56
	I0718 21:16:54.806542    6638 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0718 21:16:55.173763    6638 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56 ...
	I0718 21:16:55.173780    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56: {Name:mka167769f81b4d9e2e558c8fdd5ced3a7d6c8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:55.174066    6638 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56 ...
	I0718 21:16:55.174071    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56: {Name:mkcf6bc32bd8f1298ab3848ad38b38515e044eff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:55.174223    6638 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt.37665e56 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt
	I0718 21:16:55.174367    6638 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key.37665e56 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key
	I0718 21:16:55.174516    6638 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/proxy-client.key
	I0718 21:16:55.174703    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem (1338 bytes)
	W0718 21:16:55.174733    6638 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712_empty.pem, impossibly tiny 0 bytes
	I0718 21:16:55.174742    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca-key.pem (1675 bytes)
	I0718 21:16:55.174769    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem (1082 bytes)
	I0718 21:16:55.174796    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem (1123 bytes)
	I0718 21:16:55.174822    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/key.pem (1679 bytes)
	I0718 21:16:55.174879    6638 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem (1708 bytes)
	I0718 21:16:55.175258    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 21:16:55.182681    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 21:16:55.190227    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 21:16:55.197106    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 21:16:55.204521    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0718 21:16:55.211634    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 21:16:55.219111    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 21:16:55.226230    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0718 21:16:55.232860    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 21:16:55.239962    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/1712.pem --> /usr/share/ca-certificates/1712.pem (1338 bytes)
	I0718 21:16:55.247070    6638 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/ssl/certs/17122.pem --> /usr/share/ca-certificates/17122.pem (1708 bytes)
	I0718 21:16:55.253663    6638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 21:16:55.258592    6638 ssh_runner.go:195] Run: openssl version
	I0718 21:16:55.260283    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 21:16:55.263498    6638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:16:55.265028    6638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:25 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:16:55.265048    6638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:16:55.266748    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 21:16:55.269494    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1712.pem && ln -fs /usr/share/ca-certificates/1712.pem /etc/ssl/certs/1712.pem"
	I0718 21:16:55.272765    6638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1712.pem
	I0718 21:16:55.274050    6638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:32 /usr/share/ca-certificates/1712.pem
	I0718 21:16:55.274079    6638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1712.pem
	I0718 21:16:55.275698    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1712.pem /etc/ssl/certs/51391683.0"
	I0718 21:16:55.278581    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17122.pem && ln -fs /usr/share/ca-certificates/17122.pem /etc/ssl/certs/17122.pem"
	I0718 21:16:55.281397    6638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17122.pem
	I0718 21:16:55.282800    6638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:32 /usr/share/ca-certificates/17122.pem
	I0718 21:16:55.282820    6638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17122.pem
	I0718 21:16:55.284489    6638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17122.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 21:16:55.287996    6638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:16:55.289446    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0718 21:16:55.291920    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0718 21:16:55.293929    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0718 21:16:55.295813    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0718 21:16:55.297620    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0718 21:16:55.299318    6638 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0718 21:16:55.301204    6638 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50535 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0718 21:16:55.301275    6638 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:16:55.314469    6638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 21:16:55.317597    6638 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0718 21:16:55.317604    6638 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0718 21:16:55.317627    6638 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0718 21:16:55.320674    6638 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:16:55.320963    6638 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-465000" does not appear in /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:16:55.321062    6638 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-1213/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-465000" cluster setting kubeconfig missing "stopped-upgrade-465000" context setting]
	I0718 21:16:55.321262    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:16:55.321692    6638 kapi.go:59] client config for stopped-upgrade-465000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101c0f790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:16:55.322001    6638 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0718 21:16:55.324627    6638 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-465000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0718 21:16:55.324637    6638 kubeadm.go:1160] stopping kube-system containers ...
	I0718 21:16:55.324679    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:16:55.335291    6638 docker.go:483] Stopping containers: [8dfb9b191dbc af09e6d0a161 727d33ccdf8e 356bfe220705 874999ffa41b cd477da80381 97155289b259 ccbbd707a9a3]
	I0718 21:16:55.335374    6638 ssh_runner.go:195] Run: docker stop 8dfb9b191dbc af09e6d0a161 727d33ccdf8e 356bfe220705 874999ffa41b cd477da80381 97155289b259 ccbbd707a9a3
	I0718 21:16:55.345831    6638 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0718 21:16:55.351560    6638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:16:55.354371    6638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:16:55.354378    6638 kubeadm.go:157] found existing configuration files:
	
	I0718 21:16:55.354408    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf
	I0718 21:16:55.356875    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:16:55.356907    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:16:55.359936    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf
	I0718 21:16:55.362843    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:16:55.362865    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:16:55.365423    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf
	I0718 21:16:55.368286    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:16:55.368311    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:16:55.371541    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf
	I0718 21:16:55.374282    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:16:55.374313    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:16:55.376723    6638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:16:55.380083    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:55.403544    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:55.910111    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:56.025319    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:56.045963    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:16:56.066690    6638 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:16:56.066764    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:16:56.568888    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:16:57.068814    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:16:57.073911    6638 api_server.go:72] duration metric: took 1.007248625s to wait for apiserver process to appear ...
	I0718 21:16:57.073922    6638 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:16:57.073931    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:02.075936    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:02.075978    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:07.076073    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:07.076121    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:12.076649    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:12.076698    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:17.077167    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:17.077186    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:22.077676    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:22.077707    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:27.078443    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:27.078491    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:32.079486    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:32.079535    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:37.080772    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:37.080809    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:42.081016    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:42.081039    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:47.082668    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:47.082692    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:52.084755    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:52.084797    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:17:57.086998    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:17:57.087445    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:17:57.127732    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:17:57.127881    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:17:57.149570    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:17:57.149664    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:17:57.166798    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:17:57.166880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:17:57.179365    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:17:57.179443    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:17:57.190552    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:17:57.190618    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:17:57.201023    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:17:57.201097    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:17:57.211798    6638 logs.go:276] 0 containers: []
	W0718 21:17:57.211808    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:17:57.211867    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:17:57.222489    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:17:57.222507    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:17:57.222512    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:17:57.237134    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:17:57.237146    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:17:57.252388    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:17:57.252399    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:17:57.270235    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:17:57.270246    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:17:57.282071    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:17:57.282082    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:17:57.321225    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:17:57.321232    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:17:57.432239    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:17:57.432254    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:17:57.474218    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:17:57.474236    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:17:57.494865    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:17:57.494875    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:17:57.516712    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:17:57.516722    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:17:57.528013    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:17:57.528026    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:17:57.539797    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:17:57.539810    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:17:57.544228    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:17:57.544235    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:17:57.561249    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:17:57.561260    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:17:57.572601    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:17:57.572611    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:17:57.596668    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:17:57.596676    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:17:57.608728    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:17:57.608738    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:00.126285    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:05.128474    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:05.128591    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:05.139521    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:05.139605    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:05.150923    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:05.151000    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:05.161891    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:05.161970    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:05.172291    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:05.172366    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:05.182775    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:05.182843    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:05.193782    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:05.193853    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:05.204205    6638 logs.go:276] 0 containers: []
	W0718 21:18:05.204219    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:05.204286    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:05.215253    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:05.215272    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:05.215277    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:05.232016    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:05.232027    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:05.243471    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:05.243483    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:05.254988    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:05.255001    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:05.295283    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:05.295294    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:05.335345    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:05.335355    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:05.347369    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:05.347380    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:05.365427    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:05.365443    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:05.405188    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:05.405200    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:05.420254    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:05.420269    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:05.434036    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:05.434046    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:05.448750    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:05.448762    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:05.460564    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:05.460574    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:05.477680    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:05.477689    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:05.492347    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:05.492358    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:05.516728    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:05.516738    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:05.520804    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:05.520814    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:08.035140    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:13.037223    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:13.037483    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:13.055692    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:13.055775    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:13.069284    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:13.069361    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:13.080558    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:13.080627    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:13.091022    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:13.091087    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:13.101781    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:13.101853    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:13.111985    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:13.112050    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:13.122543    6638 logs.go:276] 0 containers: []
	W0718 21:18:13.122554    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:13.122614    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:13.132584    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:13.132603    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:13.132608    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:13.147004    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:13.147017    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:13.158373    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:13.158382    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:13.182924    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:13.182930    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:13.198112    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:13.198122    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:13.213080    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:13.213089    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:13.225070    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:13.225084    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:13.263602    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:13.263617    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:13.281543    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:13.281553    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:13.321536    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:13.321547    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:13.335503    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:13.335513    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:13.347989    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:13.348002    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:13.367248    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:13.367264    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:13.385674    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:13.385689    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:13.402760    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:13.402773    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:13.407483    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:13.407493    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:13.442645    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:13.442662    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:15.956929    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:20.959180    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:20.959315    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:20.970800    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:20.970878    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:20.981514    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:20.981580    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:20.992052    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:20.992116    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:21.002468    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:21.002534    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:21.012767    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:21.012832    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:21.023520    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:21.023595    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:21.033861    6638 logs.go:276] 0 containers: []
	W0718 21:18:21.033875    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:21.033933    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:21.044390    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:21.044405    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:21.044410    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:21.059367    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:21.059383    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:21.099025    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:21.099036    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:21.112649    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:21.112659    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:21.124542    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:21.124556    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:21.138528    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:21.138539    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:21.149618    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:21.149630    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:21.188645    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:21.188654    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:21.192577    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:21.192585    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:21.203443    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:21.203454    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:21.217815    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:21.217827    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:21.232998    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:21.233010    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:21.258018    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:21.258026    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:21.272318    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:21.272328    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:21.289437    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:21.289452    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:21.300685    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:21.300699    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:21.314427    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:21.314438    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:23.855040    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:28.857228    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:28.857422    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:28.878622    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:28.878727    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:28.892847    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:28.892925    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:28.905182    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:28.905250    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:28.919238    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:28.919308    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:28.929570    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:28.929645    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:28.946697    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:28.946773    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:28.956484    6638 logs.go:276] 0 containers: []
	W0718 21:18:28.956497    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:28.956548    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:28.966963    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:28.966980    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:28.966986    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:29.001995    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:29.002005    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:29.026132    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:29.026142    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:29.064651    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:29.064662    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:29.078736    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:29.078745    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:29.092972    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:29.092982    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:29.104935    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:29.104945    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:29.117615    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:29.117628    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:29.135530    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:29.135542    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:29.146802    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:29.146813    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:29.163199    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:29.163209    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:29.178081    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:29.178092    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:29.216930    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:29.216938    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:29.221457    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:29.221467    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:29.233845    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:29.233856    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:29.248583    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:29.248593    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:29.271633    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:29.271640    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
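	The block above is one full iteration of minikube's apiserver wait loop: each probe of https://10.0.2.15:8443/healthz is cut off by the ~5-second client timeout ("context deadline exceeded"), after which the control-plane containers are re-enumerated and their logs re-collected before the next attempt. A minimal Go sketch of that probe-with-timeout pattern, assuming a self-signed apiserver certificate; the endpoint and timeout are taken from the log, while waitForAPIServer is a hypothetical helper, not minikube's own function:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer mirrors the pattern in the log: probe /healthz with a
	// short client timeout and retry until the endpoint answers 200 OK.
	func waitForAPIServer(url string, interval time.Duration, attempts int) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				// The real client trusts the cluster CA; skipping verification keeps the sketch self-contained.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				return nil
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		if err := waitForAPIServer("https://10.0.2.15:8443/healthz", 2*time.Second, 5); err != nil {
			fmt.Println(err)
		}
	}

	In this run the probe never succeeds within the deadline, which is why the same gathering cycle repeats roughly every eight seconds for the rest of the log.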
	I0718 21:18:31.785136    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:36.786435    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:36.786651    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:36.811186    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:36.811299    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:36.827281    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:36.827359    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:36.840698    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:36.840767    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:36.852117    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:36.852190    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:36.863528    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:36.863588    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:36.874172    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:36.874237    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:36.889705    6638 logs.go:276] 0 containers: []
	W0718 21:18:36.889717    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:36.889772    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:36.900639    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:36.900658    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:36.900663    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:36.915397    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:36.915406    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:36.926626    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:36.926637    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:36.949824    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:36.949832    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:36.986068    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:36.986075    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:36.999934    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:36.999944    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:37.011502    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:37.011512    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:37.022703    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:37.022713    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:37.040360    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:37.040370    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:37.075274    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:37.075287    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:37.089931    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:37.089942    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:37.103553    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:37.103561    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:37.118133    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:37.118142    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:37.129894    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:37.129908    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:37.133903    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:37.133910    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:37.172583    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:37.172597    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:37.187588    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:37.187603    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:39.700643    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:44.702850    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:44.703003    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:44.720774    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:44.720862    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:44.734293    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:44.734370    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:44.745337    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:44.745397    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:44.755822    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:44.755886    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:44.766272    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:44.766329    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:44.776775    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:44.776860    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:44.790095    6638 logs.go:276] 0 containers: []
	W0718 21:18:44.790104    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:44.790156    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:44.802424    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:44.802440    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:44.802445    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:44.818325    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:44.818362    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:18:44.829547    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:44.829558    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:44.840999    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:44.841009    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:44.855569    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:44.855578    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:44.869508    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:44.869518    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:44.883931    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:44.883945    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:44.895963    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:44.895979    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:44.908127    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:44.908139    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:44.912584    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:44.912591    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:44.946075    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:44.946090    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:44.963467    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:44.963478    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:44.986612    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:44.986619    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:44.998299    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:44.998309    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:45.037009    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:45.037017    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:45.075386    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:45.075398    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:45.091999    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:45.092011    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:47.615406    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:18:52.617557    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:18:52.617771    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:18:52.644399    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:18:52.644493    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:18:52.658941    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:18:52.659024    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:18:52.671515    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:18:52.671579    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:18:52.682163    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:18:52.682231    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:18:52.693286    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:18:52.693349    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:18:52.704552    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:18:52.704615    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:18:52.716189    6638 logs.go:276] 0 containers: []
	W0718 21:18:52.716200    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:18:52.716250    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:18:52.726686    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:18:52.726703    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:18:52.726708    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:18:52.741469    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:18:52.741481    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:18:52.753132    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:18:52.753145    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:18:52.791935    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:18:52.791942    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:18:52.831446    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:18:52.831457    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:18:52.845325    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:18:52.845335    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:18:52.857553    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:18:52.857564    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:18:52.872003    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:18:52.872012    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:18:52.888758    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:18:52.888768    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:18:52.903339    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:18:52.903350    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:18:52.927845    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:18:52.927853    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:18:52.941513    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:18:52.941524    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:18:52.953944    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:18:52.953954    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:18:52.988055    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:18:52.988066    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:18:52.999735    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:18:52.999746    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:18:53.004350    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:18:53.004356    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:18:53.016013    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:18:53.016021    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
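	Each gathering pass repeats the same recipe visible in the Run lines: discover container IDs per component with docker ps -a --filter=name=k8s_<component>, then tail the last 400 lines of each container (plus journalctl for kubelet and Docker, and dmesg for the kernel ring buffer). A sketch of that enumeration step, assuming a local docker CLI rather than the SSH-wrapped invocation the test actually uses; gatherComponentLogs and the component list are illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherComponentLogs lists containers whose names carry the k8s_ prefix for
	// the given component, then tails each container's logs, as in the report.
	func gatherComponentLogs(component string, tail int) error {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", component)
			return nil
		}
		for _, id := range ids {
			logs, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
			if err := gatherComponentLogs(c, 400); err != nil {
				fmt.Println("error gathering", c, "logs:", err)
			}
		}
	}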
	I0718 21:18:55.529421    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:00.531614    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:00.531779    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:00.545805    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:00.545886    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:00.557224    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:00.557297    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:00.567619    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:00.567685    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:00.580060    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:00.580139    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:00.591200    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:00.591267    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:00.602034    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:00.602105    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:00.612056    6638 logs.go:276] 0 containers: []
	W0718 21:19:00.612067    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:00.612126    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:00.623600    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:00.623617    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:00.623623    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:00.637629    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:00.637639    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:00.652190    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:00.652200    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:00.667487    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:00.667497    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:00.692631    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:00.692648    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:00.733092    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:00.733105    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:00.737292    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:00.737298    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:00.748698    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:00.748709    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:00.760494    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:00.760506    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:00.774342    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:00.774352    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:00.817707    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:00.817718    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:00.830815    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:00.830826    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:00.842442    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:00.842453    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:00.854760    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:00.854770    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:00.890237    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:00.890247    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:00.930396    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:00.930407    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:00.944580    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:00.944591    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:03.465386    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:08.467587    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:08.467809    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:08.484504    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:08.484590    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:08.497395    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:08.497462    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:08.508798    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:08.508867    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:08.519361    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:08.519428    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:08.529955    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:08.530018    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:08.541054    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:08.541114    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:08.551474    6638 logs.go:276] 0 containers: []
	W0718 21:19:08.551486    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:08.551538    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:08.562597    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:08.562618    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:08.562623    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:08.582969    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:08.582980    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:08.597504    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:08.597516    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:08.609125    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:08.609135    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:08.620168    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:08.620179    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:08.634934    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:08.634946    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:08.659865    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:08.659875    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:08.674514    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:08.674527    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:08.693053    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:08.693062    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:08.705525    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:08.705540    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:08.716772    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:08.716782    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:08.740173    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:08.740185    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:08.754268    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:08.754280    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:08.792385    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:08.792398    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:08.807412    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:08.807423    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:08.843412    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:08.843425    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:08.848226    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:08.848236    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
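	The "container status" step relies on a shell fallback, visible verbatim in its Run line: run whatever `which crictl` resolves to and fall back to docker ps -a if crictl is absent. A sketch reproducing that fallback, assuming bash and sudo are available where it runs:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback as the log's "container status" step: prefer crictl,
		// otherwise list all containers with docker.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("container status fallback failed:", err)
		}
		fmt.Print(string(out))
	}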
	I0718 21:19:11.388783    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:16.390999    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:16.391516    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:16.428975    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:16.429122    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:16.448922    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:16.449029    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:16.468938    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:16.469019    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:16.480782    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:16.480855    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:16.491402    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:16.491467    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:16.503162    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:16.503230    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:16.517866    6638 logs.go:276] 0 containers: []
	W0718 21:19:16.517880    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:16.517936    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:16.528419    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:16.528438    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:16.528444    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:16.565743    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:16.565755    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:16.601041    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:16.601053    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:16.620087    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:16.620101    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:16.631375    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:16.631390    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:16.654928    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:16.654937    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:16.666735    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:16.666748    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:16.684359    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:16.684369    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:16.697639    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:16.697650    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:16.712238    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:16.712250    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:16.751045    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:16.751055    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:16.761952    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:16.761962    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:16.765981    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:16.765990    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:16.780717    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:16.780726    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:16.793284    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:16.793295    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:16.807927    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:16.807940    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:16.822615    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:16.822625    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:19.335889    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:24.338283    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:24.338684    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:24.372132    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:24.372264    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:24.390515    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:24.390614    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:24.414739    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:24.414812    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:24.430600    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:24.430681    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:24.441370    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:24.441446    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:24.452544    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:24.452622    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:24.463294    6638 logs.go:276] 0 containers: []
	W0718 21:19:24.463306    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:24.463359    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:24.473774    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:24.473793    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:24.473798    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:24.496489    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:24.496497    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:24.508036    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:24.508049    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:24.519466    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:24.519477    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:24.531021    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:24.531031    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:24.542476    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:24.542489    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:24.556186    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:24.556196    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:24.573428    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:24.573437    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:24.584598    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:24.584612    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:24.621828    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:24.621842    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:24.633306    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:24.633319    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:24.648018    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:24.648029    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:24.685822    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:24.685837    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:24.720063    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:24.720075    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:24.735420    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:24.735434    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:24.740294    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:24.740302    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:24.755330    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:24.755343    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:27.271548    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:32.274018    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:32.274252    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:32.296560    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:32.296659    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:32.312532    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:32.312604    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:32.325359    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:32.325438    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:32.336903    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:32.336976    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:32.347253    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:32.347325    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:32.358006    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:32.358069    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:32.368424    6638 logs.go:276] 0 containers: []
	W0718 21:19:32.368437    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:32.368487    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:32.378670    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:32.378689    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:32.378694    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:32.382990    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:32.382999    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:32.420953    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:32.420964    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:32.436123    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:32.436134    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:32.453968    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:32.453982    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:32.466195    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:32.466208    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:32.491127    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:32.491143    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:32.506605    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:32.506617    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:32.527848    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:32.527861    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:32.539930    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:32.539942    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:32.579469    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:32.579478    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:32.593923    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:32.593936    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:32.630259    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:32.630271    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:32.646084    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:32.646093    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:32.657940    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:32.657951    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:32.669213    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:32.669223    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:32.682496    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:32.682513    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:35.195859    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:40.198069    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:40.198534    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:40.238506    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:40.238648    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:40.259207    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:40.259302    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:40.274408    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:40.274485    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:40.286991    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:40.287062    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:40.297521    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:40.297585    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:40.308316    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:40.308381    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:40.326001    6638 logs.go:276] 0 containers: []
	W0718 21:19:40.326013    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:40.326068    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:40.336683    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:40.336703    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:40.336708    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:40.348508    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:40.348520    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:40.362842    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:40.362852    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:40.387365    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:40.387372    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:40.403214    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:40.403224    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:40.417817    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:40.417827    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:40.431567    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:40.431578    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:40.443357    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:40.443367    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:40.457008    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:40.457020    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:40.469869    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:40.469879    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:40.507881    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:40.507889    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:40.512296    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:40.512302    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:40.523570    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:40.523580    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:40.541087    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:40.541097    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:40.556535    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:40.556547    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:40.568343    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:40.568355    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:40.603778    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:40.603789    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:43.143619    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:48.145876    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:48.146299    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:48.173791    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:48.173904    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:48.197486    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:48.197568    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:48.210173    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:48.210247    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:48.221569    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:48.221639    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:48.232802    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:48.232877    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:48.243716    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:48.243784    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:48.253938    6638 logs.go:276] 0 containers: []
	W0718 21:19:48.253949    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:48.254009    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:48.264273    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:48.264292    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:48.264298    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:48.278184    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:48.278194    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:48.289647    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:48.289658    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:48.301324    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:48.301335    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:48.335546    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:48.335558    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:48.349417    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:48.349429    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:48.361831    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:48.361843    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:48.373408    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:48.373419    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:48.410110    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:48.410125    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:48.421328    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:48.421338    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:48.435103    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:48.435113    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:48.452707    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:48.452718    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:48.475072    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:48.475081    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:48.487123    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:48.487134    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:48.491934    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:48.491947    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:48.530262    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:48.530271    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:48.548457    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:48.548466    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:51.066137    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:19:56.068297    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:19:56.068470    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:19:56.079639    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:19:56.079716    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:19:56.090186    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:19:56.090255    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:19:56.100587    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:19:56.100657    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:19:56.110670    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:19:56.110752    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:19:56.121073    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:19:56.121141    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:19:56.131598    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:19:56.131660    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:19:56.142467    6638 logs.go:276] 0 containers: []
	W0718 21:19:56.142483    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:19:56.142543    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:19:56.153277    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:19:56.153295    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:19:56.153300    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:19:56.167021    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:19:56.167030    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:19:56.181316    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:19:56.181329    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:19:56.217575    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:19:56.217581    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:19:56.229103    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:19:56.229112    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:19:56.243257    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:19:56.243268    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:19:56.255056    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:19:56.255068    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:19:56.280196    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:19:56.280208    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:19:56.297729    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:19:56.297741    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:19:56.334367    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:19:56.334378    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:19:56.346264    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:19:56.346275    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:19:56.360724    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:19:56.360732    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:19:56.378404    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:19:56.378416    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:19:56.392286    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:19:56.392297    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:19:56.396844    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:19:56.396852    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:19:56.436744    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:19:56.436756    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:19:56.452946    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:19:56.452957    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:19:58.967062    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:03.967811    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:03.968010    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:03.989777    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:03.989864    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:04.004348    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:04.004426    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:04.018332    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:04.018396    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:04.029435    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:04.029510    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:04.043944    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:04.044013    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:04.055707    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:04.055782    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:04.065817    6638 logs.go:276] 0 containers: []
	W0718 21:20:04.065829    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:04.065887    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:04.076257    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:04.076277    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:04.076282    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:04.080516    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:04.080525    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:04.094219    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:04.094228    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:04.111187    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:04.111198    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:04.122877    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:04.122887    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:04.138088    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:04.138097    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:04.149835    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:04.149844    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:04.164565    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:04.164575    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:04.203103    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:04.203123    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:04.218463    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:04.218477    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:04.236930    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:04.236939    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:04.262436    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:04.262448    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:04.300972    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:04.300985    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:04.313158    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:04.313169    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:04.329452    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:04.329469    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:04.342407    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:04.342420    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:04.354895    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:04.354907    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:06.896044    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:11.898440    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:11.898622    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:11.910284    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:11.910361    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:11.921224    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:11.921296    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:11.931778    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:11.931843    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:11.942531    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:11.942602    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:11.953540    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:11.953606    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:11.963701    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:11.963773    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:11.973748    6638 logs.go:276] 0 containers: []
	W0718 21:20:11.973759    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:11.973817    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:11.984792    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:11.984810    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:11.984817    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:12.000143    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:12.000157    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:12.014387    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:12.014400    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:12.053816    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:12.053829    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:12.068509    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:12.068520    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:12.108007    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:12.108019    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:12.127175    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:12.127184    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:12.147667    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:12.147680    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:12.162719    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:12.162736    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:12.175724    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:12.175741    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:12.191422    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:12.191436    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:12.203765    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:12.203778    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:12.242672    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:12.242683    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:12.258396    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:12.258404    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:12.282514    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:12.282526    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:12.296134    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:12.296146    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:12.300507    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:12.300520    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:14.815520    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:19.817763    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:19.817997    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:19.841206    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:19.841338    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:19.857720    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:19.857795    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:19.870478    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:19.870554    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:19.882128    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:19.882203    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:19.893601    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:19.893667    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:19.905680    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:19.905760    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:19.916945    6638 logs.go:276] 0 containers: []
	W0718 21:20:19.916974    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:19.917035    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:19.933170    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:19.933187    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:19.933193    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:19.970840    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:19.970852    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:20.010210    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:20.010228    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:20.025861    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:20.025869    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:20.038234    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:20.038245    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:20.079212    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:20.079229    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:20.095166    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:20.095180    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:20.107943    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:20.107954    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:20.135976    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:20.135988    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:20.148969    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:20.148984    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:20.168703    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:20.168717    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:20.187209    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:20.187220    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:20.191480    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:20.191486    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:20.206433    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:20.206450    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:20.223120    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:20.223134    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:20.235536    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:20.235547    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:20.259100    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:20.259113    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:22.772776    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:27.775049    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:27.775386    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:27.812722    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:27.812774    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:27.830107    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:27.830157    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:27.843840    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:27.843880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:27.855806    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:27.855878    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:27.867543    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:27.867613    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:27.879070    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:27.879149    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:27.890222    6638 logs.go:276] 0 containers: []
	W0718 21:20:27.890235    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:27.890290    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:27.902109    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:27.902170    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:27.902181    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:27.917270    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:27.917281    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:27.959224    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:27.959245    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:27.973678    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:27.973688    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:27.986071    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:27.986083    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:28.002024    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:28.002036    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:28.020245    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:28.020254    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:28.059784    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:28.059803    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:28.065049    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:28.065060    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:28.080396    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:28.080407    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:28.093197    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:28.093209    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:28.110303    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:28.110317    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:28.123494    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:28.123506    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:28.135933    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:28.135943    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:28.160852    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:28.160860    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:28.176558    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:28.176568    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:28.188897    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:28.188914    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:30.726779    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:35.729161    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:35.729321    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:35.747988    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:35.748102    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:35.762433    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:35.762508    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:35.774974    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:35.775043    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:35.786546    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:35.786619    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:35.800920    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:35.800996    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:35.812189    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:35.812263    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:35.823139    6638 logs.go:276] 0 containers: []
	W0718 21:20:35.823151    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:35.823214    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:35.834527    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:35.834546    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:35.834552    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:35.850172    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:35.850184    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:35.862330    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:35.862342    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:35.878296    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:35.878310    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:35.890565    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:35.890578    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:35.913931    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:35.913949    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:35.918591    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:35.918603    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:35.967817    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:35.967830    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:35.982070    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:35.982084    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:35.997295    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:35.997308    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:36.009398    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:36.009411    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:36.027252    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:36.027267    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:36.040344    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:36.040358    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:36.052798    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:36.052811    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:36.095409    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:36.095421    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:36.130803    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:36.130816    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:36.149168    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:36.149180    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:38.663230    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:43.665350    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:43.665420    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:43.677593    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:43.677670    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:43.689759    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:43.689827    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:43.700906    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:43.700983    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:43.712032    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:43.712103    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:43.723624    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:43.723696    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:43.734577    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:43.734657    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:43.745529    6638 logs.go:276] 0 containers: []
	W0718 21:20:43.745542    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:43.745607    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:43.757259    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:43.757275    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:43.757279    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:43.798104    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:43.798121    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:43.815969    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:43.815981    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:43.831483    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:43.831496    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:43.843782    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:43.843794    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:43.848634    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:43.848645    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:43.863206    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:43.863215    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:43.878415    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:43.878423    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:43.890963    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:43.890976    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:43.906674    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:43.906686    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:43.925052    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:43.925067    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:43.944559    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:43.944574    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:43.984048    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:43.984059    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:44.002115    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:44.002128    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:44.025287    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:44.025297    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:44.039285    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:44.039295    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:44.055198    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:44.055210    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:46.591879    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:51.593923    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:51.594085    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:20:51.605174    6638 logs.go:276] 2 containers: [7df96524df83 356bfe220705]
	I0718 21:20:51.605252    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:20:51.616627    6638 logs.go:276] 2 containers: [5c6ced8e9bb6 af09e6d0a161]
	I0718 21:20:51.616703    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:20:51.627647    6638 logs.go:276] 1 containers: [4897a95ebf8b]
	I0718 21:20:51.627710    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:20:51.638802    6638 logs.go:276] 2 containers: [d691cad9f485 727d33ccdf8e]
	I0718 21:20:51.638880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:20:51.649823    6638 logs.go:276] 1 containers: [000abdab1f01]
	I0718 21:20:51.649894    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:20:51.666186    6638 logs.go:276] 2 containers: [fa1f723730ec 8dfb9b191dbc]
	I0718 21:20:51.666254    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:20:51.682502    6638 logs.go:276] 0 containers: []
	W0718 21:20:51.682516    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:20:51.682580    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:20:51.694188    6638 logs.go:276] 2 containers: [0f5ce2993090 5765d1ced405]
	I0718 21:20:51.694212    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:20:51.694217    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:20:51.731723    6638 logs.go:123] Gathering logs for kube-apiserver [7df96524df83] ...
	I0718 21:20:51.731736    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7df96524df83"
	I0718 21:20:51.747375    6638 logs.go:123] Gathering logs for kube-scheduler [d691cad9f485] ...
	I0718 21:20:51.747388    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691cad9f485"
	I0718 21:20:51.760157    6638 logs.go:123] Gathering logs for kube-proxy [000abdab1f01] ...
	I0718 21:20:51.760165    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 000abdab1f01"
	I0718 21:20:51.775859    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:20:51.775871    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:20:51.780863    6638 logs.go:123] Gathering logs for kube-controller-manager [fa1f723730ec] ...
	I0718 21:20:51.780875    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa1f723730ec"
	I0718 21:20:51.804003    6638 logs.go:123] Gathering logs for storage-provisioner [0f5ce2993090] ...
	I0718 21:20:51.804019    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f5ce2993090"
	I0718 21:20:51.817409    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:20:51.817419    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:20:51.829551    6638 logs.go:123] Gathering logs for kube-apiserver [356bfe220705] ...
	I0718 21:20:51.829563    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 356bfe220705"
	I0718 21:20:51.867379    6638 logs.go:123] Gathering logs for etcd [af09e6d0a161] ...
	I0718 21:20:51.867391    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af09e6d0a161"
	I0718 21:20:51.885999    6638 logs.go:123] Gathering logs for kube-scheduler [727d33ccdf8e] ...
	I0718 21:20:51.886009    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 727d33ccdf8e"
	I0718 21:20:51.900751    6638 logs.go:123] Gathering logs for kube-controller-manager [8dfb9b191dbc] ...
	I0718 21:20:51.900762    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfb9b191dbc"
	I0718 21:20:51.916003    6638 logs.go:123] Gathering logs for storage-provisioner [5765d1ced405] ...
	I0718 21:20:51.916012    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5765d1ced405"
	I0718 21:20:51.927763    6638 logs.go:123] Gathering logs for etcd [5c6ced8e9bb6] ...
	I0718 21:20:51.927774    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6ced8e9bb6"
	I0718 21:20:51.941762    6638 logs.go:123] Gathering logs for coredns [4897a95ebf8b] ...
	I0718 21:20:51.941776    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4897a95ebf8b"
	I0718 21:20:51.953137    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:20:51.953149    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:20:51.975837    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:20:51.975846    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:20:54.516264    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:20:59.518416    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:20:59.518449    6638 kubeadm.go:597] duration metric: took 4m4.207913125s to restartPrimaryControlPlane
	W0718 21:20:59.518482    6638 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0718 21:20:59.518495    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0718 21:21:00.558455    6638 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.039978s)
	I0718 21:21:00.558511    6638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:21:00.563416    6638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:21:00.566207    6638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:21:00.568954    6638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:21:00.568959    6638 kubeadm.go:157] found existing configuration files:
	
	I0718 21:21:00.569000    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf
	I0718 21:21:00.571454    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:21:00.571479    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:21:00.574243    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf
	I0718 21:21:00.577071    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:21:00.577091    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:21:00.579739    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf
	I0718 21:21:00.582781    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:21:00.582808    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:21:00.586135    6638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf
	I0718 21:21:00.588991    6638 kubeadm.go:163] "https://control-plane.minikube.internal:50535" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50535 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:21:00.589015    6638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:21:00.591715    6638 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0718 21:21:00.609028    6638 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0718 21:21:00.609059    6638 kubeadm.go:310] [preflight] Running pre-flight checks
	I0718 21:21:00.660759    6638 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0718 21:21:00.660809    6638 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0718 21:21:00.660865    6638 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0718 21:21:00.714491    6638 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 21:21:00.722838    6638 out.go:204]   - Generating certificates and keys ...
	I0718 21:21:00.722870    6638 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0718 21:21:00.722909    6638 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0718 21:21:00.722957    6638 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0718 21:21:00.722993    6638 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0718 21:21:00.723034    6638 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0718 21:21:00.723065    6638 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0718 21:21:00.723098    6638 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0718 21:21:00.723126    6638 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0718 21:21:00.723162    6638 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0718 21:21:00.723202    6638 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0718 21:21:00.723224    6638 kubeadm.go:310] [certs] Using the existing "sa" key
	I0718 21:21:00.723252    6638 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 21:21:00.835794    6638 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 21:21:00.921562    6638 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 21:21:00.969776    6638 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 21:21:01.070162    6638 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 21:21:01.097794    6638 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 21:21:01.098257    6638 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 21:21:01.098288    6638 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0718 21:21:01.171646    6638 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 21:21:01.175816    6638 out.go:204]   - Booting up control plane ...
	I0718 21:21:01.175865    6638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 21:21:01.175907    6638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 21:21:01.175942    6638 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 21:21:01.175980    6638 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 21:21:01.176099    6638 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0718 21:21:05.676346    6638 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504437 seconds
	I0718 21:21:05.676437    6638 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0718 21:21:05.681924    6638 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0718 21:21:06.192729    6638 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0718 21:21:06.192925    6638 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-465000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0718 21:21:06.696520    6638 kubeadm.go:310] [bootstrap-token] Using token: 7z5uzo.dwmxbixp3b0364hf
	I0718 21:21:06.703019    6638 out.go:204]   - Configuring RBAC rules ...
	I0718 21:21:06.703090    6638 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0718 21:21:06.703141    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0718 21:21:06.707987    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0718 21:21:06.708874    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0718 21:21:06.709782    6638 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0718 21:21:06.710789    6638 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0718 21:21:06.713892    6638 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0718 21:21:06.879047    6638 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0718 21:21:07.099953    6638 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0718 21:21:07.100328    6638 kubeadm.go:310] 
	I0718 21:21:07.100358    6638 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0718 21:21:07.100361    6638 kubeadm.go:310] 
	I0718 21:21:07.100396    6638 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0718 21:21:07.100419    6638 kubeadm.go:310] 
	I0718 21:21:07.100435    6638 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0718 21:21:07.100478    6638 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0718 21:21:07.100506    6638 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0718 21:21:07.100511    6638 kubeadm.go:310] 
	I0718 21:21:07.100539    6638 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0718 21:21:07.100542    6638 kubeadm.go:310] 
	I0718 21:21:07.100567    6638 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0718 21:21:07.100570    6638 kubeadm.go:310] 
	I0718 21:21:07.100609    6638 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0718 21:21:07.100651    6638 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0718 21:21:07.100697    6638 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0718 21:21:07.100708    6638 kubeadm.go:310] 
	I0718 21:21:07.100757    6638 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0718 21:21:07.100794    6638 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0718 21:21:07.100797    6638 kubeadm.go:310] 
	I0718 21:21:07.100838    6638 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7z5uzo.dwmxbixp3b0364hf \
	I0718 21:21:07.100899    6638 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc \
	I0718 21:21:07.100914    6638 kubeadm.go:310] 	--control-plane 
	I0718 21:21:07.100917    6638 kubeadm.go:310] 
	I0718 21:21:07.100959    6638 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0718 21:21:07.100963    6638 kubeadm.go:310] 
	I0718 21:21:07.101009    6638 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7z5uzo.dwmxbixp3b0364hf \
	I0718 21:21:07.101054    6638 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f34450f9c1a10cedbcf1ffe2cb2181bc57c8f7371f4e0c6c2f5596c345693ebc 
	I0718 21:21:07.101184    6638 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0718 21:21:07.101252    6638 cni.go:84] Creating CNI manager for ""
	I0718 21:21:07.101262    6638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:21:07.105508    6638 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0718 21:21:07.113356    6638 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0718 21:21:07.116193    6638 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
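	(Editor's note: the 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI config. As a hedged illustration only, a generic bridge + host-local conflist has roughly the shape below; the exact contents minikube writes may differ, so compare against the real file on the node. The example path /tmp/bridge-example.conflist is purely illustrative.)

	    # Illustrative sketch: a generic bridge + host-local CNI conflist, to show the
	    # shape of config being written at this step. Not the exact file minikube writes.
	    cat <<'EOF' > /tmp/bridge-example.conflist
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "cni0",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	    # The file the log reports writing, for comparison (run inside the guest):
	    sudo cat /etc/cni/net.d/1-k8s.conflist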
	I0718 21:21:07.120926    6638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 21:21:07.121003    6638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0718 21:21:07.121009    6638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-465000 minikube.k8s.io/updated_at=2024_07_18T21_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=stopped-upgrade-465000 minikube.k8s.io/primary=true
	I0718 21:21:07.160980    6638 ops.go:34] apiserver oom_adj: -16
	I0718 21:21:07.160984    6638 kubeadm.go:1113] duration metric: took 40.0165ms to wait for elevateKubeSystemPrivileges
	I0718 21:21:07.160994    6638 kubeadm.go:394] duration metric: took 4m11.867085417s to StartCluster
	I0718 21:21:07.161006    6638 settings.go:142] acquiring lock: {Name:mk9577e2a46ebc5e017130011eb528f9fea1ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:21:07.161099    6638 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:21:07.161521    6638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/kubeconfig: {Name:mkf56373be3902a9bdffa8fbef084edcda35f111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:21:07.161896    6638 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:21:07.161900    6638 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 21:21:07.161933    6638 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-465000"
	I0718 21:21:07.161949    6638 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-465000"
	W0718 21:21:07.161954    6638 addons.go:243] addon storage-provisioner should already be in state true
	I0718 21:21:07.161966    6638 host.go:66] Checking if "stopped-upgrade-465000" exists ...
	I0718 21:21:07.161967    6638 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-465000"
	I0718 21:21:07.161983    6638 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-465000"
	I0718 21:21:07.162020    6638 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:21:07.165577    6638 out.go:177] * Verifying Kubernetes components...
	I0718 21:21:07.166262    6638 kapi.go:59] client config for stopped-upgrade-465000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/stopped-upgrade-465000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1213/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101c0f790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:21:07.169709    6638 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-465000"
	W0718 21:21:07.169714    6638 addons.go:243] addon default-storageclass should already be in state true
	I0718 21:21:07.169721    6638 host.go:66] Checking if "stopped-upgrade-465000" exists ...
	I0718 21:21:07.170245    6638 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0718 21:21:07.170250    6638 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0718 21:21:07.170256    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:21:07.173483    6638 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:21:07.177355    6638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:21:07.181567    6638 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:21:07.181574    6638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0718 21:21:07.181580    6638 sshutil.go:53] new ssh client: &{IP:localhost Port:50500 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/stopped-upgrade-465000/id_rsa Username:docker}
	I0718 21:21:07.258891    6638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:21:07.265115    6638 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:21:07.265178    6638 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:21:07.270538    6638 api_server.go:72] duration metric: took 108.632416ms to wait for apiserver process to appear ...
	I0718 21:21:07.270547    6638 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:21:07.270556    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:07.275737    6638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0718 21:21:07.328247    6638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0718 21:21:12.272523    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:12.272550    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:17.272661    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:17.272693    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:22.272902    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:22.272956    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:27.273436    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:27.273483    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:32.274012    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:32.274044    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:37.274652    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:37.274679    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0718 21:21:37.674123    6638 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0718 21:21:37.677170    6638 out.go:177] * Enabled addons: storage-provisioner
	I0718 21:21:37.687953    6638 addons.go:510] duration metric: took 30.5269365s for enable addons: enabled=[storage-provisioner]
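	(Editor's note: the surrounding api_server.go:253/269 lines are minikube repeatedly probing the apiserver's /healthz endpoint at roughly 5-second intervals and timing out every time. A minimal shell sketch of an equivalent probe is below; the endpoint 10.0.2.15:8443 comes from the log, while the retry count and the use of curl -k to skip TLS verification are assumptions for illustration. This is not minikube's own implementation, which performs the check in Go.)

	    # Probe the apiserver health endpoint the way the healthz checks in this log do.
	    # Note: 10.0.2.15 is the guest-side address; with QEMU user-mode networking it is
	    # typically only reachable from inside the VM, which is consistent with the timeouts seen here.
	    HEALTHZ=https://10.0.2.15:8443/healthz
	    for i in $(seq 1 12); do
	      if curl -sk --max-time 5 "$HEALTHZ" | grep -q '^ok$'; then
	        echo "apiserver healthy after $i attempt(s)"
	        break
	      fi
	      echo "attempt $i: apiserver not ready"
	      sleep 5
	    done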
	I0718 21:21:42.275526    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:42.275572    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:47.276815    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:47.276877    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:52.278481    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:52.278537    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:21:57.280503    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:21:57.280526    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:02.282591    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:02.282617    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:07.284684    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:07.284823    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:07.309873    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:07.309951    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:07.326303    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:07.326372    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:07.337286    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:07.337360    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:07.348107    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:07.348183    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:07.359251    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:07.359325    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:07.369500    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:07.369563    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:07.380100    6638 logs.go:276] 0 containers: []
	W0718 21:22:07.380112    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:07.380174    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:07.390244    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:07.390259    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:07.390264    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:07.402563    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:07.402573    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:07.436906    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:07.436916    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:07.441206    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:07.441213    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:07.456533    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:07.456545    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:07.471456    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:07.471466    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:07.483325    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:07.483336    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:07.498958    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:07.498968    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:07.516884    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:07.516895    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:07.556900    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:07.556912    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:07.568470    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:07.568486    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:07.580024    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:07.580038    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:07.603483    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:07.603491    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
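	(Editor's note: each time the healthz probe gives up, the harness gathers diagnostics the same way: it resolves each control-plane component's container ID with a docker ps name filter, tails the last 400 lines of that container's logs, then collects the kubelet and docker journals, dmesg, "describe nodes", and container status. A condensed shell sketch of that collection loop follows; the component names, filters, and 400-line tails mirror the commands above, and it is assumed to be run inside the guest, e.g. via minikube ssh.)

	    # Collect the same per-component logs the test harness gathers in this log.
	    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager storage-provisioner; do
	      for id in $(docker ps -a --filter="name=k8s_${comp}" --format='{{.ID}}'); do
	        echo "=== ${comp} (${id}) ==="
	        docker logs --tail 400 "${id}"
	      done
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400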
	I0718 21:22:10.117621    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:15.119764    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:15.120165    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:15.155280    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:15.155413    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:15.176216    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:15.176343    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:15.191687    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:15.191772    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:15.203906    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:15.203976    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:15.215008    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:15.215079    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:15.225552    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:15.225617    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:15.236526    6638 logs.go:276] 0 containers: []
	W0718 21:22:15.236537    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:15.236589    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:15.247583    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:15.247598    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:15.247602    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:15.259658    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:15.259669    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:15.271559    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:15.271568    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:15.289846    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:15.289859    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:15.303811    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:15.303820    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:15.308604    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:15.308611    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:15.344958    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:15.344968    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:15.359755    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:15.359768    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:15.375344    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:15.375353    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:15.399184    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:15.399192    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:22:15.410865    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:15.410874    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:15.447200    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:15.447213    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:15.459554    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:15.459564    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:17.977541    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:22.979833    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:22.980033    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:23.000200    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:23.000288    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:23.014363    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:23.014441    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:23.026147    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:23.026223    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:23.037422    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:23.037484    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:23.047705    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:23.047774    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:23.058101    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:23.058175    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:23.068726    6638 logs.go:276] 0 containers: []
	W0718 21:22:23.068738    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:23.068806    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:23.078867    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:23.078882    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:23.078887    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:23.114828    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:23.114835    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:23.119187    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:23.119193    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:23.154353    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:23.154363    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:23.167268    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:23.167281    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:23.178853    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:23.178863    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:23.190673    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:23.190683    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:23.205771    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:23.205781    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:23.220131    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:23.220143    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:23.234509    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:23.234521    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:23.250415    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:23.250425    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:23.267689    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:23.267698    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:23.291411    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:23.291419    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:22:25.805167    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:30.807507    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:30.807913    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:30.848560    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:30.848691    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:30.874774    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:30.874866    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:30.888553    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:30.888630    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:30.900170    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:30.900236    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:30.911162    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:30.911234    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:30.921573    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:30.921634    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:30.931318    6638 logs.go:276] 0 containers: []
	W0718 21:22:30.931330    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:30.931387    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:30.942015    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:30.942032    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:30.942037    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:30.957151    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:30.957163    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:30.968812    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:30.968824    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:30.980575    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:30.980586    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:30.997779    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:30.997790    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:31.009335    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:31.009346    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:22:31.020941    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:31.020951    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:31.045232    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:31.045239    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:31.080058    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:31.080065    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:31.084334    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:31.084341    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:31.128262    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:31.128272    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:31.142578    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:31.142591    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:31.157513    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:31.157527    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:33.674419    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:38.677007    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:38.677434    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:38.717130    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:38.717251    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:38.738154    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:38.738282    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:38.753548    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:38.753611    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:38.766354    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:38.766417    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:38.777016    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:38.777079    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:38.788828    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:38.788893    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:38.799152    6638 logs.go:276] 0 containers: []
	W0718 21:22:38.799163    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:38.799210    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:38.809740    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:38.809753    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:38.809758    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:38.823539    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:38.823549    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:38.835076    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:38.835090    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:38.847247    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:38.847259    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:38.865038    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:38.865047    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:38.900116    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:38.900122    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:38.904137    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:38.904145    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:38.941196    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:38.941212    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:38.955910    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:38.955922    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:38.980508    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:38.980517    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:22:38.991682    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:38.991692    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:39.009231    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:39.009243    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:39.025935    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:39.025947    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:41.539556    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:46.542309    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:46.542762    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:46.586504    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:46.586627    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:46.607711    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:46.607801    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:46.622541    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:46.622613    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:46.635604    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:46.635673    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:46.646842    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:46.646908    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:46.657926    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:46.657995    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:46.669010    6638 logs.go:276] 0 containers: []
	W0718 21:22:46.669022    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:46.669080    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:46.680381    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:46.680399    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:46.680405    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:46.716409    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:46.716422    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:46.728810    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:46.728821    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:46.748060    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:46.748075    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:46.759893    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:46.759904    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:22:46.771885    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:46.771897    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:46.804955    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:46.804965    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:46.808922    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:46.808928    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:46.821098    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:46.821110    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:46.837218    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:46.837228    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:46.848732    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:46.848741    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:46.873380    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:46.873389    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:46.887863    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:46.887877    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:49.404827    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:22:54.407401    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:22:54.407782    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:22:54.449430    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:22:54.449560    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:22:54.472009    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:22:54.472098    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:22:54.487376    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:22:54.487446    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:22:54.499824    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:22:54.499888    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:22:54.510994    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:22:54.511058    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:22:54.521747    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:22:54.521808    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:22:54.533545    6638 logs.go:276] 0 containers: []
	W0718 21:22:54.533554    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:22:54.533602    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:22:54.544377    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:22:54.544393    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:22:54.544398    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:22:54.556728    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:22:54.556741    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:22:54.573360    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:22:54.573371    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:22:54.585855    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:22:54.585864    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:22:54.609739    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:22:54.609752    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:22:54.643862    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:22:54.643872    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:22:54.659334    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:22:54.659346    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:22:54.671349    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:22:54.671361    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:22:54.683621    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:22:54.683632    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:22:54.713549    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:22:54.713558    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:22:54.725761    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:22:54.725770    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:22:54.729845    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:22:54.729853    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:22:54.773765    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:22:54.773780    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:22:57.290517    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:02.292750    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:02.292983    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:02.315536    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:02.315646    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:02.330462    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:02.330523    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:02.342766    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:23:02.342823    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:02.354029    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:02.354093    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:02.364983    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:02.365058    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:02.375577    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:02.375636    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:02.386230    6638 logs.go:276] 0 containers: []
	W0718 21:23:02.386242    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:02.386302    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:02.397542    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:02.397557    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:02.397562    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:02.441346    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:02.441360    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:02.458004    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:02.458014    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:02.475628    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:02.475637    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:02.492499    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:02.492509    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:02.503967    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:02.503977    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:02.520495    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:02.520507    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:02.532122    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:02.532133    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:02.554940    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:02.554949    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:02.588215    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:02.588222    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:02.592158    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:02.592167    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:02.606974    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:02.606986    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:02.621347    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:02.621356    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:05.135059    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:10.135801    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:10.136137    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:10.179612    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:10.179748    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:10.202782    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:10.202911    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:10.218507    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:23:10.218586    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:10.231095    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:10.231165    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:10.242112    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:10.242177    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:10.258044    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:10.258104    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:10.269008    6638 logs.go:276] 0 containers: []
	W0718 21:23:10.269020    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:10.269071    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:10.280728    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:10.280743    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:10.280748    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:10.315639    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:10.315652    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:10.351814    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:10.351826    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:10.365943    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:10.365954    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:10.382870    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:10.382882    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:10.394871    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:10.394884    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:10.407992    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:10.408002    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:10.412332    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:10.412339    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:10.429174    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:10.429184    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:10.445196    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:10.445205    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:10.463331    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:10.463342    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:10.475720    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:10.475730    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:10.500869    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:10.500879    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:13.014501    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:18.017185    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:18.017550    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:18.064745    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:18.064871    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:18.085609    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:18.085704    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:18.100359    6638 logs.go:276] 2 containers: [790e3823af0b 27061214a5c6]
	I0718 21:23:18.100436    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:18.112298    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:18.112365    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:18.122987    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:18.123056    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:18.133554    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:18.133622    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:18.144214    6638 logs.go:276] 0 containers: []
	W0718 21:23:18.144225    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:18.144280    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:18.154661    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:18.154677    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:18.154683    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:18.166171    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:18.166182    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:18.201147    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:18.201155    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:18.213621    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:18.213631    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:18.231520    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:18.231531    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:18.256295    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:18.256306    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:18.268262    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:18.268272    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:18.299471    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:18.299484    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:18.320998    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:18.321014    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:18.337633    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:18.337645    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:18.350554    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:18.350568    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:18.450406    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:18.450417    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:18.464912    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:18.464921    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:20.979910    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:25.982263    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:25.982719    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:26.026886    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:26.027039    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:26.047621    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:26.047701    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:26.062799    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:23:26.062880    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:26.075000    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:26.075076    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:26.085848    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:26.085914    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:26.096227    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:26.096300    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:26.106162    6638 logs.go:276] 0 containers: []
	W0718 21:23:26.106174    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:26.106229    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:26.116296    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:26.116312    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:23:26.116316    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:23:26.128164    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:26.128175    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:26.143541    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:26.143549    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:26.154882    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:26.154892    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:26.188616    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:23:26.188626    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:23:26.200567    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:26.200578    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:26.212378    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:26.212387    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:26.230104    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:26.230115    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:26.265559    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:26.265569    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:26.280358    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:26.280369    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:26.298644    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:26.298656    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:26.310163    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:26.310172    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:26.314324    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:26.314330    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:26.326298    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:26.326309    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:26.338633    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:26.338644    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:28.862479    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:33.865123    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:33.865421    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:33.896688    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:33.896805    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:33.915262    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:33.915344    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:33.929390    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:23:33.929469    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:33.941036    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:33.941104    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:33.951752    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:33.951830    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:33.962738    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:33.962806    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:33.972768    6638 logs.go:276] 0 containers: []
	W0718 21:23:33.972781    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:33.972835    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:33.982916    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:33.982932    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:23:33.982937    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:23:33.994371    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:33.994383    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:34.006053    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:34.006064    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:34.017893    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:34.017905    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:34.031734    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:34.031746    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:34.043655    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:34.043667    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:34.068690    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:34.068696    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:34.103587    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:34.103597    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:34.123066    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:34.123076    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:34.138654    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:34.138667    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:34.164311    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:34.164320    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:34.175872    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:34.175884    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:34.180640    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:23:34.180648    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:23:34.191755    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:34.191766    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:34.203890    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:34.203902    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:36.740761    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:41.743460    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:41.743871    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:41.781310    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:41.781434    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:41.802663    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:41.802774    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:41.817494    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:23:41.817567    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:41.830013    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:41.830077    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:41.840665    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:41.840729    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:41.851734    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:41.851793    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:41.861975    6638 logs.go:276] 0 containers: []
	W0718 21:23:41.861987    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:41.862047    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:41.872672    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:41.872689    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:41.872695    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:41.887102    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:41.887115    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:41.899198    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:41.899208    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:41.910618    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:41.910627    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:41.922596    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:41.922610    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:41.947902    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:41.947912    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:41.983396    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:23:41.983404    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:23:41.998971    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:41.998983    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:42.010518    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:42.010529    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:42.014621    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:42.014627    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:42.050146    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:42.050164    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:42.064808    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:23:42.064818    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:23:42.076258    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:42.076269    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:42.087473    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:42.087485    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:42.107775    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:42.107786    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:44.626634    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:49.627375    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:49.627446    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:49.639388    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:49.639439    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:49.650361    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:49.650414    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:49.661670    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:23:49.661724    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:49.673627    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:49.673714    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:49.685237    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:49.685300    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:49.695367    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:49.695428    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:49.706197    6638 logs.go:276] 0 containers: []
	W0718 21:23:49.706212    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:49.706266    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:49.719451    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:49.719466    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:49.719473    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:49.724599    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:49.724609    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:49.738876    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:49.738888    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:49.752693    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:49.752705    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:49.766274    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:49.766287    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:49.779165    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:49.779174    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:49.804598    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:49.804615    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:49.841173    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:49.841189    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:49.856784    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:49.856795    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:49.868557    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:49.868570    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:49.906279    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:23:49.906290    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:23:49.918747    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:49.918758    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:49.936182    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:49.936189    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:49.948196    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:23:49.948207    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:23:49.960908    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:49.960919    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:52.482071    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:23:57.484594    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:23:57.484915    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:23:57.523261    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:23:57.523351    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:23:57.537932    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:23:57.538010    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:23:57.550796    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:23:57.550869    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:23:57.561739    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:23:57.561796    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:23:57.572198    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:23:57.572260    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:23:57.582696    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:23:57.582770    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:23:57.592574    6638 logs.go:276] 0 containers: []
	W0718 21:23:57.592587    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:23:57.592640    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:23:57.603135    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:23:57.603153    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:23:57.603158    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:23:57.620248    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:23:57.620258    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:23:57.631673    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:23:57.631684    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:23:57.665398    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:23:57.665406    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:23:57.680850    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:23:57.680860    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:23:57.716583    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:23:57.716594    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:23:57.731728    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:23:57.731737    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:23:57.755669    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:23:57.755677    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:23:57.760416    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:23:57.760424    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:23:57.774900    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:23:57.774916    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:23:57.788333    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:23:57.788345    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:23:57.801604    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:23:57.801621    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:23:57.825428    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:23:57.825447    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:23:57.839132    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:23:57.839145    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:23:57.858867    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:23:57.858885    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:00.374352    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:05.376421    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:05.376561    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:05.388599    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:05.388666    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:05.399286    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:05.399350    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:05.409785    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:05.409851    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:05.421962    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:05.422041    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:05.431998    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:05.432051    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:05.442336    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:05.442400    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:05.452565    6638 logs.go:276] 0 containers: []
	W0718 21:24:05.452577    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:05.452633    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:05.462348    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:05.462364    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:05.462369    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:05.497962    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:05.497972    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:05.511868    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:05.511877    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:05.523381    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:05.523391    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:05.540534    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:05.540546    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:05.544810    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:05.544819    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:05.556095    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:05.556106    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:05.571244    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:05.571254    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:05.595043    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:05.595049    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:05.609190    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:05.609201    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:05.620755    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:05.620768    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:05.632204    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:05.632214    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:05.667508    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:05.667518    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:05.681868    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:05.681877    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:05.693229    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:05.693239    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:08.205596    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:13.207438    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:13.207521    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:13.219712    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:13.219778    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:13.232672    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:13.232731    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:13.249887    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:13.249951    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:13.262234    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:13.262297    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:13.276319    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:13.276384    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:13.287726    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:13.287782    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:13.298800    6638 logs.go:276] 0 containers: []
	W0718 21:24:13.298816    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:13.298866    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:13.309997    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:13.310014    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:13.310020    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:13.326408    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:13.326417    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:13.340202    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:13.340212    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:13.357151    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:13.357166    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:13.393863    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:13.393877    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:13.413186    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:13.413198    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:13.426003    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:13.426017    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:13.438607    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:13.438620    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:13.453097    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:13.453110    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:13.465972    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:13.465984    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:13.491126    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:13.491144    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:13.495943    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:13.495954    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:13.533698    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:13.533709    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:13.551398    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:13.551411    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:13.570535    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:13.570547    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:16.092372    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:21.095502    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:21.095884    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:21.128929    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:21.129054    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:21.148101    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:21.148198    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:21.164503    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:21.164571    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:21.176192    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:21.176263    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:21.186884    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:21.186952    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:21.197608    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:21.197678    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:21.214970    6638 logs.go:276] 0 containers: []
	W0718 21:24:21.214981    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:21.215034    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:21.225234    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:21.225253    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:21.225257    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:21.242993    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:21.243006    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:21.254757    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:21.254767    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:21.266635    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:21.266648    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:21.281791    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:21.281802    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:21.293843    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:21.293857    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:21.311628    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:21.311637    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:21.334903    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:21.334910    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:21.368424    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:21.368431    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:21.380152    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:21.380165    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:21.384281    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:21.384290    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:21.396146    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:21.396158    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:21.407963    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:21.407972    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:21.423097    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:21.423109    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:21.435008    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:21.435020    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:23.973284    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:28.976907    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:28.977293    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:29.010002    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:29.010127    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:29.032200    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:29.032285    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:29.046169    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:29.046240    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:29.057589    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:29.057654    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:29.067959    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:29.068026    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:29.078418    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:29.078487    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:29.088540    6638 logs.go:276] 0 containers: []
	W0718 21:24:29.088552    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:29.088610    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:29.104079    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:29.104097    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:29.104102    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:29.140237    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:29.140248    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:29.156811    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:29.156824    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:29.189927    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:29.189937    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:29.204708    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:29.204721    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:29.220457    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:29.220468    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:29.231994    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:29.232011    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:29.243484    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:29.243499    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:29.258843    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:29.258851    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:29.282718    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:29.282725    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:29.287398    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:29.287405    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:29.298828    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:29.298837    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:29.310510    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:29.310520    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:29.327076    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:29.327085    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:29.344428    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:29.344438    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:31.858685    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:36.861598    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:36.861674    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:36.878085    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:36.878152    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:36.889856    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:36.889928    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:36.902132    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:36.902204    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:36.913188    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:36.913253    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:36.924775    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:36.924868    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:36.936400    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:36.936465    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:36.948714    6638 logs.go:276] 0 containers: []
	W0718 21:24:36.948727    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:36.948783    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:36.960311    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:36.960347    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:36.960359    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:36.996824    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:36.996836    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:37.014861    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:37.014873    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:37.020070    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:37.020079    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:37.032558    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:37.032569    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:37.045076    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:37.045088    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:37.081312    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:37.081328    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:37.097679    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:37.097688    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:37.111767    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:37.111777    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:37.128486    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:37.128498    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:37.140670    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:37.140679    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:37.152881    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:37.152897    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:37.165892    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:37.165902    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:37.190160    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:37.190171    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:37.205675    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:37.205686    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:39.722985    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:44.726035    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:44.726263    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:44.745827    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:44.745898    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:44.757045    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:44.757108    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:44.767440    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:44.767505    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:44.777501    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:44.777561    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:44.788006    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:44.788066    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:44.798948    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:44.799019    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:44.809143    6638 logs.go:276] 0 containers: []
	W0718 21:24:44.809152    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:44.809204    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:44.819422    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:44.819437    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:44.819442    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:44.834426    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:44.834436    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:44.849500    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:44.849510    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:44.884620    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:44.884629    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:44.888781    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:44.888789    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:44.903102    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:44.903115    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:44.914624    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:44.914636    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:44.932948    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:44.932958    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:44.949489    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:44.949500    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:44.960369    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:44.960380    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:44.994556    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:44.994570    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:45.008701    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:45.008713    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:45.021404    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:45.021418    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:45.038644    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:45.038654    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:45.061658    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:45.061668    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:47.573950    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:24:52.576606    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:24:52.577127    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:24:52.622342    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:24:52.622465    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:24:52.642971    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:24:52.643058    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:24:52.657098    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:24:52.657171    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:24:52.669245    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:24:52.669310    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:24:52.679602    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:24:52.679663    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:24:52.690113    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:24:52.690183    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:24:52.699875    6638 logs.go:276] 0 containers: []
	W0718 21:24:52.699887    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:24:52.699944    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:24:52.710267    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:24:52.710282    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:24:52.710287    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:24:52.726650    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:24:52.726661    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:24:52.742401    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:24:52.742413    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:24:52.747187    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:24:52.747195    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:24:52.761894    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:24:52.761903    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:24:52.773262    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:24:52.773275    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:24:52.784911    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:24:52.784923    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:24:52.803058    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:24:52.803068    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:24:52.826686    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:24:52.826693    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:24:52.860199    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:24:52.860207    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:24:52.895008    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:24:52.895019    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:24:52.907031    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:24:52.907042    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:24:52.918695    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:24:52.918707    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:24:52.934493    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:24:52.934506    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:24:52.948109    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:24:52.948123    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:24:55.461416    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:25:00.463807    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:25:00.464162    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0718 21:25:00.497440    6638 logs.go:276] 1 containers: [6ab72c2ba86a]
	I0718 21:25:00.497549    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0718 21:25:00.517906    6638 logs.go:276] 1 containers: [39d7b6581b07]
	I0718 21:25:00.517998    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0718 21:25:00.532383    6638 logs.go:276] 4 containers: [15785288fbef ba670bd8626a 790e3823af0b 27061214a5c6]
	I0718 21:25:00.532454    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0718 21:25:00.543990    6638 logs.go:276] 1 containers: [b4cacabfe076]
	I0718 21:25:00.544050    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0718 21:25:00.555180    6638 logs.go:276] 1 containers: [40a9a87951ae]
	I0718 21:25:00.555246    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0718 21:25:00.565689    6638 logs.go:276] 1 containers: [ae97c36fa434]
	I0718 21:25:00.565743    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0718 21:25:00.575953    6638 logs.go:276] 0 containers: []
	W0718 21:25:00.575965    6638 logs.go:278] No container was found matching "kindnet"
	I0718 21:25:00.576018    6638 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0718 21:25:00.586284    6638 logs.go:276] 1 containers: [646d1284ed4c]
	I0718 21:25:00.586299    6638 logs.go:123] Gathering logs for kube-apiserver [6ab72c2ba86a] ...
	I0718 21:25:00.586305    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ab72c2ba86a"
	I0718 21:25:00.600449    6638 logs.go:123] Gathering logs for kube-controller-manager [ae97c36fa434] ...
	I0718 21:25:00.600458    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae97c36fa434"
	I0718 21:25:00.617550    6638 logs.go:123] Gathering logs for describe nodes ...
	I0718 21:25:00.617558    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0718 21:25:00.654283    6638 logs.go:123] Gathering logs for coredns [15785288fbef] ...
	I0718 21:25:00.654296    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15785288fbef"
	I0718 21:25:00.677284    6638 logs.go:123] Gathering logs for kubelet ...
	I0718 21:25:00.677299    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0718 21:25:00.712394    6638 logs.go:123] Gathering logs for kube-proxy [40a9a87951ae] ...
	I0718 21:25:00.712414    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40a9a87951ae"
	I0718 21:25:00.724735    6638 logs.go:123] Gathering logs for Docker ...
	I0718 21:25:00.724747    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0718 21:25:00.748787    6638 logs.go:123] Gathering logs for container status ...
	I0718 21:25:00.748806    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0718 21:25:00.761614    6638 logs.go:123] Gathering logs for coredns [ba670bd8626a] ...
	I0718 21:25:00.761627    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba670bd8626a"
	I0718 21:25:00.774442    6638 logs.go:123] Gathering logs for etcd [39d7b6581b07] ...
	I0718 21:25:00.774454    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39d7b6581b07"
	I0718 21:25:00.789195    6638 logs.go:123] Gathering logs for coredns [790e3823af0b] ...
	I0718 21:25:00.789210    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 790e3823af0b"
	I0718 21:25:00.808681    6638 logs.go:123] Gathering logs for coredns [27061214a5c6] ...
	I0718 21:25:00.808692    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27061214a5c6"
	I0718 21:25:00.821614    6638 logs.go:123] Gathering logs for kube-scheduler [b4cacabfe076] ...
	I0718 21:25:00.821626    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4cacabfe076"
	I0718 21:25:00.838093    6638 logs.go:123] Gathering logs for storage-provisioner [646d1284ed4c] ...
	I0718 21:25:00.838106    6638 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 646d1284ed4c"
	I0718 21:25:00.850773    6638 logs.go:123] Gathering logs for dmesg ...
	I0718 21:25:00.850785    6638 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0718 21:25:03.357845    6638 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0718 21:25:08.359953    6638 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0718 21:25:08.364532    6638 out.go:177] 
	W0718 21:25:08.368611    6638 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0718 21:25:08.368624    6638 out.go:239] * 
	* 
	W0718 21:25:08.369123    6638 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:08.379548    6638 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-465000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.58s)
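The failure above is the wait-for-healthy step timing out: api_server.go keeps probing https://10.0.2.15:8443/healthz, each request hits the client timeout, and after the 6m0s node wait the start aborts with GUEST_START. As a rough illustration of that polling pattern (a minimal, hypothetical sketch only; the function name, intervals and TLS handling here are assumptions, not minikube's actual code):

```go
// Hypothetical sketch of the healthz wait seen in the log: GET the apiserver
// healthz endpoint with a short per-request timeout and give up once an
// overall deadline passes. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, like the "Client.Timeout exceeded" lines above
		Transport: &http.Transport{
			// the apiserver serves a self-signed cert, so skip verification for the probe
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", overall)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X", err)
	}
}
```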

                                                
                                    
TestPause/serial/Start (10.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-508000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-508000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.972642583s)

                                                
                                                
-- stdout --
	* [pause-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-508000" primary control-plane node in "pause-508000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-508000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-508000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-508000 -n pause-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-508000 -n pause-508000: exit status 7 (67.621541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.04s)
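This failure, like the qemu2 starts that follow, dies before the VM exists: the driver runs /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and gets "Connection refused", which suggests nothing is listening on that socket on the test host. A quick stand-alone check for that condition (a hedged sketch, not part of the test suite) is to dial the socket directly:

```go
// Assumed diagnostic helper, not part of minikube: report whether anything is
// listening on the socket_vmnet unix socket the qemu2 driver needs.
// "connection refused" here matches the errors in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("cannot connect to %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}
```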

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 : exit status 80 (9.892792042s)

                                                
                                                
-- stdout --
	* [NoKubernetes-339000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-339000" primary control-plane node in "NoKubernetes-339000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-339000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (63.976417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.96s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 : exit status 80 (5.249017208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-339000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-339000
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (64.468292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 : exit status 80 (5.23848975s)

                                                
                                                
-- stdout --
	* [NoKubernetes-339000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-339000
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (54.158709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 : exit status 80 (5.279983083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-339000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-339000
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (60.426583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0718 21:23:42.669151    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.774164375s)

                                                
                                                
-- stdout --
	* [auto-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-736000" primary control-plane node in "auto-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:23:38.483617    6902 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:23:38.483737    6902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:38.483740    6902 out.go:304] Setting ErrFile to fd 2...
	I0718 21:23:38.483743    6902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:38.483894    6902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:23:38.484970    6902 out.go:298] Setting JSON to false
	I0718 21:23:38.501449    6902 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4986,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:23:38.501546    6902 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:23:38.506968    6902 out.go:177] * [auto-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:23:38.513018    6902 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:23:38.513078    6902 notify.go:220] Checking for updates...
	I0718 21:23:38.519948    6902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:23:38.522917    6902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:23:38.525971    6902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:23:38.528867    6902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:23:38.531939    6902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:23:38.535284    6902 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:23:38.535352    6902 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:23:38.535405    6902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:23:38.539842    6902 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:23:38.546906    6902 start.go:297] selected driver: qemu2
	I0718 21:23:38.546913    6902 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:23:38.546926    6902 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:23:38.549353    6902 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:23:38.551889    6902 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:23:38.555076    6902 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:23:38.555106    6902 cni.go:84] Creating CNI manager for ""
	I0718 21:23:38.555114    6902 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:23:38.555119    6902 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:23:38.555150    6902 start.go:340] cluster config:
	{Name:auto-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:23:38.558855    6902 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:23:38.565925    6902 out.go:177] * Starting "auto-736000" primary control-plane node in "auto-736000" cluster
	I0718 21:23:38.569894    6902 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:23:38.569912    6902 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:23:38.569923    6902 cache.go:56] Caching tarball of preloaded images
	I0718 21:23:38.569988    6902 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:23:38.569994    6902 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:23:38.570055    6902 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/auto-736000/config.json ...
	I0718 21:23:38.570068    6902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/auto-736000/config.json: {Name:mka962653de544b84f25b1632bd339752d703cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:23:38.570497    6902 start.go:360] acquireMachinesLock for auto-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:23:38.570532    6902 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "auto-736000"
	I0718 21:23:38.570543    6902 start.go:93] Provisioning new machine with config: &{Name:auto-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:23:38.570578    6902 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:23:38.576938    6902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:23:38.592317    6902 start.go:159] libmachine.API.Create for "auto-736000" (driver="qemu2")
	I0718 21:23:38.592343    6902 client.go:168] LocalClient.Create starting
	I0718 21:23:38.592405    6902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:23:38.592436    6902 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:38.592445    6902 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:38.592483    6902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:23:38.592507    6902 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:38.592518    6902 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:38.592877    6902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:23:38.734068    6902 main.go:141] libmachine: Creating SSH key...
	I0718 21:23:38.806354    6902 main.go:141] libmachine: Creating Disk image...
	I0718 21:23:38.806360    6902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:23:38.806529    6902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2
	I0718 21:23:38.815707    6902 main.go:141] libmachine: STDOUT: 
	I0718 21:23:38.815730    6902 main.go:141] libmachine: STDERR: 
	I0718 21:23:38.815781    6902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2 +20000M
	I0718 21:23:38.823583    6902 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:23:38.823597    6902 main.go:141] libmachine: STDERR: 
	I0718 21:23:38.823612    6902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2
	I0718 21:23:38.823616    6902 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:23:38.823627    6902 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:23:38.823661    6902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:7d:7c:bf:4d:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2
	I0718 21:23:38.825197    6902 main.go:141] libmachine: STDOUT: 
	I0718 21:23:38.825211    6902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:23:38.825227    6902 client.go:171] duration metric: took 232.885917ms to LocalClient.Create
	I0718 21:23:40.827384    6902 start.go:128] duration metric: took 2.25683875s to createHost
	I0718 21:23:40.827481    6902 start.go:83] releasing machines lock for "auto-736000", held for 2.257004416s
	W0718 21:23:40.827534    6902 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:23:40.838694    6902 out.go:177] * Deleting "auto-736000" in qemu2 ...
	W0718 21:23:40.869404    6902 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:23:40.869439    6902 start.go:729] Will try again in 5 seconds ...
	I0718 21:23:45.871536    6902 start.go:360] acquireMachinesLock for auto-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:23:45.872266    6902 start.go:364] duration metric: took 601.208µs to acquireMachinesLock for "auto-736000"
	I0718 21:23:45.872379    6902 start.go:93] Provisioning new machine with config: &{Name:auto-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:23:45.872584    6902 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:23:45.880993    6902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:23:45.926217    6902 start.go:159] libmachine.API.Create for "auto-736000" (driver="qemu2")
	I0718 21:23:45.926266    6902 client.go:168] LocalClient.Create starting
	I0718 21:23:45.926377    6902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:23:45.926434    6902 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:45.926446    6902 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:45.926527    6902 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:23:45.926567    6902 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:45.926576    6902 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:45.927079    6902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:23:46.075844    6902 main.go:141] libmachine: Creating SSH key...
	I0718 21:23:46.158732    6902 main.go:141] libmachine: Creating Disk image...
	I0718 21:23:46.158741    6902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:23:46.158905    6902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2
	I0718 21:23:46.168340    6902 main.go:141] libmachine: STDOUT: 
	I0718 21:23:46.168359    6902 main.go:141] libmachine: STDERR: 
	I0718 21:23:46.168408    6902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2 +20000M
	I0718 21:23:46.176364    6902 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:23:46.176378    6902 main.go:141] libmachine: STDERR: 
	I0718 21:23:46.176390    6902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2
	I0718 21:23:46.176395    6902 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:23:46.176407    6902 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:23:46.176435    6902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:b3:4b:d4:e3:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/auto-736000/disk.qcow2
	I0718 21:23:46.178202    6902 main.go:141] libmachine: STDOUT: 
	I0718 21:23:46.178219    6902 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:23:46.178234    6902 client.go:171] duration metric: took 251.969208ms to LocalClient.Create
	I0718 21:23:48.180374    6902 start.go:128] duration metric: took 2.307799458s to createHost
	I0718 21:23:48.180486    6902 start.go:83] releasing machines lock for "auto-736000", held for 2.30826275s
	W0718 21:23:48.180950    6902 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:23:48.190792    6902 out.go:177] 
	W0718 21:23:48.204300    6902 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:23:48.204358    6902 out.go:239] * 
	* 
	W0718 21:23:48.207276    6902 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:23:48.215605    6902 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)
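Every failure in this group has the same root cause: the socket_vmnet_client wrapper cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM is never launched. A minimal check of the daemon on the build host could look like the sketch below; it assumes socket_vmnet was installed through Homebrew as the minikube qemu2 driver documentation describes, and the service name is an assumption, not something taken from this log.

    # Does the socket the driver expects actually exist?
    ls -l /var/run/socket_vmnet
    # Status of the Homebrew-managed daemon (assumed install method)
    sudo brew services info socket_vmnet
    # Restart it if it is not running
    sudo brew services restart socket_vmnet

If the daemon reports as running, the same "Connection refused" would instead point at permissions on the socket rather than a missing process.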

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0718 21:23:59.596093    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.798374708s)

                                                
                                                
-- stdout --
	* [kindnet-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-736000" primary control-plane node in "kindnet-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:23:50.396261    7018 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:23:50.396383    7018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:50.396387    7018 out.go:304] Setting ErrFile to fd 2...
	I0718 21:23:50.396389    7018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:50.396512    7018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:23:50.397595    7018 out.go:298] Setting JSON to false
	I0718 21:23:50.413878    7018 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4998,"bootTime":1721358032,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:23:50.413960    7018 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:23:50.417741    7018 out.go:177] * [kindnet-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:23:50.422153    7018 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:23:50.422268    7018 notify.go:220] Checking for updates...
	I0718 21:23:50.429695    7018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:23:50.430833    7018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:23:50.433679    7018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:23:50.436696    7018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:23:50.439724    7018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:23:50.443088    7018 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:23:50.443156    7018 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:23:50.443227    7018 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:23:50.447680    7018 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:23:50.454607    7018 start.go:297] selected driver: qemu2
	I0718 21:23:50.454612    7018 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:23:50.454618    7018 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:23:50.456858    7018 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:23:50.459647    7018 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:23:50.462811    7018 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:23:50.462840    7018 cni.go:84] Creating CNI manager for "kindnet"
	I0718 21:23:50.462844    7018 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 21:23:50.462871    7018 start.go:340] cluster config:
	{Name:kindnet-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:23:50.466430    7018 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:23:50.473747    7018 out.go:177] * Starting "kindnet-736000" primary control-plane node in "kindnet-736000" cluster
	I0718 21:23:50.476643    7018 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:23:50.476656    7018 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:23:50.476666    7018 cache.go:56] Caching tarball of preloaded images
	I0718 21:23:50.476717    7018 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:23:50.476722    7018 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:23:50.476775    7018 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kindnet-736000/config.json ...
	I0718 21:23:50.476789    7018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kindnet-736000/config.json: {Name:mkac70756396c3d825195ab1104d7bb6a2b8e6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:23:50.476994    7018 start.go:360] acquireMachinesLock for kindnet-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:23:50.477023    7018 start.go:364] duration metric: took 24.209µs to acquireMachinesLock for "kindnet-736000"
	I0718 21:23:50.477033    7018 start.go:93] Provisioning new machine with config: &{Name:kindnet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:23:50.477058    7018 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:23:50.484690    7018 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:23:50.500083    7018 start.go:159] libmachine.API.Create for "kindnet-736000" (driver="qemu2")
	I0718 21:23:50.500112    7018 client.go:168] LocalClient.Create starting
	I0718 21:23:50.500177    7018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:23:50.500205    7018 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:50.500215    7018 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:50.500257    7018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:23:50.500280    7018 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:50.500291    7018 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:50.500688    7018 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:23:50.642571    7018 main.go:141] libmachine: Creating SSH key...
	I0718 21:23:50.693108    7018 main.go:141] libmachine: Creating Disk image...
	I0718 21:23:50.693116    7018 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:23:50.693297    7018 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2
	I0718 21:23:50.702673    7018 main.go:141] libmachine: STDOUT: 
	I0718 21:23:50.702694    7018 main.go:141] libmachine: STDERR: 
	I0718 21:23:50.702763    7018 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2 +20000M
	I0718 21:23:50.710657    7018 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:23:50.710672    7018 main.go:141] libmachine: STDERR: 
	I0718 21:23:50.710692    7018 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2
	I0718 21:23:50.710697    7018 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:23:50.710709    7018 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:23:50.710737    7018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ae:7c:43:df:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2
	I0718 21:23:50.712365    7018 main.go:141] libmachine: STDOUT: 
	I0718 21:23:50.712379    7018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:23:50.712404    7018 client.go:171] duration metric: took 212.294083ms to LocalClient.Create
	I0718 21:23:52.714465    7018 start.go:128] duration metric: took 2.237456125s to createHost
	I0718 21:23:52.714500    7018 start.go:83] releasing machines lock for "kindnet-736000", held for 2.237534375s
	W0718 21:23:52.714545    7018 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:23:52.720577    7018 out.go:177] * Deleting "kindnet-736000" in qemu2 ...
	W0718 21:23:52.741814    7018 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:23:52.741832    7018 start.go:729] Will try again in 5 seconds ...
	I0718 21:23:57.743773    7018 start.go:360] acquireMachinesLock for kindnet-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:23:57.743880    7018 start.go:364] duration metric: took 83.583µs to acquireMachinesLock for "kindnet-736000"
	I0718 21:23:57.743900    7018 start.go:93] Provisioning new machine with config: &{Name:kindnet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:23:57.743932    7018 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:23:57.752064    7018 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:23:57.768386    7018 start.go:159] libmachine.API.Create for "kindnet-736000" (driver="qemu2")
	I0718 21:23:57.768417    7018 client.go:168] LocalClient.Create starting
	I0718 21:23:57.768509    7018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:23:57.768551    7018 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:57.768562    7018 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:57.768604    7018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:23:57.768631    7018 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:57.768639    7018 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:57.768954    7018 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:23:57.912746    7018 main.go:141] libmachine: Creating SSH key...
	I0718 21:23:58.095232    7018 main.go:141] libmachine: Creating Disk image...
	I0718 21:23:58.095240    7018 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:23:58.095457    7018 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2
	I0718 21:23:58.105349    7018 main.go:141] libmachine: STDOUT: 
	I0718 21:23:58.105370    7018 main.go:141] libmachine: STDERR: 
	I0718 21:23:58.105430    7018 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2 +20000M
	I0718 21:23:58.113577    7018 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:23:58.113596    7018 main.go:141] libmachine: STDERR: 
	I0718 21:23:58.113610    7018 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2
	I0718 21:23:58.113616    7018 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:23:58.113626    7018 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:23:58.113650    7018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:42:6f:0c:ab:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kindnet-736000/disk.qcow2
	I0718 21:23:58.115356    7018 main.go:141] libmachine: STDOUT: 
	I0718 21:23:58.115374    7018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:23:58.115386    7018 client.go:171] duration metric: took 346.975125ms to LocalClient.Create
	I0718 21:24:00.117530    7018 start.go:128] duration metric: took 2.37364075s to createHost
	I0718 21:24:00.117637    7018 start.go:83] releasing machines lock for "kindnet-736000", held for 2.373761958s
	W0718 21:24:00.117995    7018 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:00.131417    7018 out.go:177] 
	W0718 21:24:00.134607    7018 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:24:00.134642    7018 out.go:239] * 
	* 
	W0718 21:24:00.137595    7018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:24:00.151552    7018 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)
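The kindnet run fails identically: both creation attempts abort before QEMU ever starts because the wrapper cannot connect. One way to confirm the wrapper itself is the failing piece, rather than QEMU or the profile, is to run it directly with a throwaway command, reusing the client and socket paths shown in the log above; `true` is only a placeholder command here and is not something the test suite invokes.

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # With the daemon down this reproduces the error from the log:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused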

                                                
                                    
TestNetworkPlugins/group/calico/Start (10.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.081976291s)

                                                
                                                
-- stdout --
	* [calico-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-736000" primary control-plane node in "calico-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:24:02.410053    7135 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:24:02.410181    7135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:02.410184    7135 out.go:304] Setting ErrFile to fd 2...
	I0718 21:24:02.410186    7135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:02.410326    7135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:24:02.411512    7135 out.go:298] Setting JSON to false
	I0718 21:24:02.428422    7135 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5010,"bootTime":1721358032,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:24:02.428498    7135 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:24:02.434667    7135 out.go:177] * [calico-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:24:02.442645    7135 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:24:02.442763    7135 notify.go:220] Checking for updates...
	I0718 21:24:02.449606    7135 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:24:02.452586    7135 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:24:02.453873    7135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:24:02.456633    7135 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:24:02.459637    7135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:24:02.463003    7135 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:24:02.463063    7135 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:24:02.463108    7135 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:24:02.467562    7135 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:24:02.474652    7135 start.go:297] selected driver: qemu2
	I0718 21:24:02.474660    7135 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:24:02.474668    7135 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:24:02.476806    7135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:24:02.479586    7135 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:24:02.482695    7135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:24:02.482722    7135 cni.go:84] Creating CNI manager for "calico"
	I0718 21:24:02.482730    7135 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0718 21:24:02.482760    7135 start.go:340] cluster config:
	{Name:calico-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:24:02.486141    7135 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:24:02.493571    7135 out.go:177] * Starting "calico-736000" primary control-plane node in "calico-736000" cluster
	I0718 21:24:02.497640    7135 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:24:02.497654    7135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:24:02.497664    7135 cache.go:56] Caching tarball of preloaded images
	I0718 21:24:02.497715    7135 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:24:02.497722    7135 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:24:02.497789    7135 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/calico-736000/config.json ...
	I0718 21:24:02.497805    7135 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/calico-736000/config.json: {Name:mk783c3fd0d804290dd70dc7be5fb63d6a19fc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:24:02.498033    7135 start.go:360] acquireMachinesLock for calico-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:02.498062    7135 start.go:364] duration metric: took 23.834µs to acquireMachinesLock for "calico-736000"
	I0718 21:24:02.498071    7135 start.go:93] Provisioning new machine with config: &{Name:calico-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:02.498097    7135 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:02.506627    7135 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:02.521583    7135 start.go:159] libmachine.API.Create for "calico-736000" (driver="qemu2")
	I0718 21:24:02.521606    7135 client.go:168] LocalClient.Create starting
	I0718 21:24:02.521665    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:02.521693    7135 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:02.521703    7135 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:02.521742    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:02.521764    7135 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:02.521775    7135 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:02.522137    7135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:02.663488    7135 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:02.857188    7135 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:02.857198    7135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:02.857386    7135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2
	I0718 21:24:02.866858    7135 main.go:141] libmachine: STDOUT: 
	I0718 21:24:02.866884    7135 main.go:141] libmachine: STDERR: 
	I0718 21:24:02.866933    7135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2 +20000M
	I0718 21:24:02.874921    7135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:02.874936    7135 main.go:141] libmachine: STDERR: 
	I0718 21:24:02.874958    7135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2
	I0718 21:24:02.874962    7135 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:02.874973    7135 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:02.875000    7135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9f:a7:92:b0:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2
	I0718 21:24:02.876613    7135 main.go:141] libmachine: STDOUT: 
	I0718 21:24:02.876627    7135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:02.876649    7135 client.go:171] duration metric: took 355.049958ms to LocalClient.Create
	I0718 21:24:04.878874    7135 start.go:128] duration metric: took 2.380814875s to createHost
	I0718 21:24:04.878950    7135 start.go:83] releasing machines lock for "calico-736000", held for 2.380949291s
	W0718 21:24:04.879009    7135 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:04.891678    7135 out.go:177] * Deleting "calico-736000" in qemu2 ...
	W0718 21:24:04.916282    7135 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:04.916309    7135 start.go:729] Will try again in 5 seconds ...
	I0718 21:24:09.917140    7135 start.go:360] acquireMachinesLock for calico-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:09.917749    7135 start.go:364] duration metric: took 466.917µs to acquireMachinesLock for "calico-736000"
	I0718 21:24:09.917898    7135 start.go:93] Provisioning new machine with config: &{Name:calico-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:09.918206    7135 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:09.927806    7135 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:09.975845    7135 start.go:159] libmachine.API.Create for "calico-736000" (driver="qemu2")
	I0718 21:24:09.975898    7135 client.go:168] LocalClient.Create starting
	I0718 21:24:09.976021    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:09.976092    7135 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:09.976108    7135 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:09.976177    7135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:09.976221    7135 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:09.976239    7135 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:09.976870    7135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:10.127079    7135 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:10.408123    7135 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:10.408135    7135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:10.408336    7135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2
	I0718 21:24:10.418295    7135 main.go:141] libmachine: STDOUT: 
	I0718 21:24:10.418326    7135 main.go:141] libmachine: STDERR: 
	I0718 21:24:10.418403    7135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2 +20000M
	I0718 21:24:10.427229    7135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:10.427245    7135 main.go:141] libmachine: STDERR: 
	I0718 21:24:10.427262    7135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2
	I0718 21:24:10.427269    7135 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:10.427290    7135 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:10.427323    7135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:09:be:c1:b4:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/calico-736000/disk.qcow2
	I0718 21:24:10.429091    7135 main.go:141] libmachine: STDOUT: 
	I0718 21:24:10.429107    7135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:10.429121    7135 client.go:171] duration metric: took 453.230292ms to LocalClient.Create
	I0718 21:24:12.431982    7135 start.go:128] duration metric: took 2.51303675s to createHost
	I0718 21:24:12.432016    7135 start.go:83] releasing machines lock for "calico-736000", held for 2.513517833s
	W0718 21:24:12.432209    7135 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:12.442979    7135 out.go:177] 
	W0718 21:24:12.446993    7135 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:24:12.447006    7135 out.go:239] * 
	* 
	W0718 21:24:12.448196    7135 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:24:12.455978    7135 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.08s)
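The calico run repeats the same pattern. As a sketch only, the two suggestions minikube prints above map onto the following commands when run with the workspace binary used by these tests (collecting logs before deleting the profile, since the delete removes the machine the logs come from):

    # Capture logs for the failing profile before removing it
    out/minikube-darwin-arm64 logs --file=logs.txt -p calico-736000
    # Then apply the suggested fix from the output above
    out/minikube-darwin-arm64 delete -p calico-736000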

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.850171333s)

                                                
                                                
-- stdout --
	* [custom-flannel-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-736000" primary control-plane node in "custom-flannel-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:24:14.849951    7256 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:24:14.850108    7256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:14.850116    7256 out.go:304] Setting ErrFile to fd 2...
	I0718 21:24:14.850118    7256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:14.850256    7256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:24:14.851361    7256 out.go:298] Setting JSON to false
	I0718 21:24:14.867727    7256 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5022,"bootTime":1721358032,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:24:14.867839    7256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:24:14.873372    7256 out.go:177] * [custom-flannel-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:24:14.879350    7256 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:24:14.879426    7256 notify.go:220] Checking for updates...
	I0718 21:24:14.887360    7256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:24:14.890315    7256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:24:14.893402    7256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:24:14.896381    7256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:24:14.899328    7256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:24:14.902669    7256 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:24:14.902732    7256 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:24:14.902782    7256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:24:14.909385    7256 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:24:14.917318    7256 start.go:297] selected driver: qemu2
	I0718 21:24:14.917324    7256 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:24:14.917331    7256 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:24:14.919602    7256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:24:14.922373    7256 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:24:14.925417    7256 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:24:14.925467    7256 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0718 21:24:14.925478    7256 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0718 21:24:14.925513    7256 start.go:340] cluster config:
	{Name:custom-flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:24:14.929143    7256 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:24:14.939363    7256 out.go:177] * Starting "custom-flannel-736000" primary control-plane node in "custom-flannel-736000" cluster
	I0718 21:24:14.943308    7256 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:24:14.943326    7256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:24:14.943339    7256 cache.go:56] Caching tarball of preloaded images
	I0718 21:24:14.943399    7256 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:24:14.943405    7256 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:24:14.943466    7256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/custom-flannel-736000/config.json ...
	I0718 21:24:14.943478    7256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/custom-flannel-736000/config.json: {Name:mkf1955a3483d7dd0e814de0ff4612d4ac67c607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:24:14.943797    7256 start.go:360] acquireMachinesLock for custom-flannel-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:14.943829    7256 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "custom-flannel-736000"
	I0718 21:24:14.943839    7256 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:14.943872    7256 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:14.952328    7256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:14.968194    7256 start.go:159] libmachine.API.Create for "custom-flannel-736000" (driver="qemu2")
	I0718 21:24:14.968236    7256 client.go:168] LocalClient.Create starting
	I0718 21:24:14.968300    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:14.968332    7256 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:14.968350    7256 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:14.968391    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:14.968413    7256 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:14.968420    7256 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:14.968768    7256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:15.111840    7256 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:15.272659    7256 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:15.272671    7256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:15.272864    7256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2
	I0718 21:24:15.282824    7256 main.go:141] libmachine: STDOUT: 
	I0718 21:24:15.282845    7256 main.go:141] libmachine: STDERR: 
	I0718 21:24:15.282900    7256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2 +20000M
	I0718 21:24:15.290866    7256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:15.290882    7256 main.go:141] libmachine: STDERR: 
	I0718 21:24:15.290897    7256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2
	I0718 21:24:15.290903    7256 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:15.290917    7256 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:15.290948    7256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:8d:03:a7:1b:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2
	I0718 21:24:15.292590    7256 main.go:141] libmachine: STDOUT: 
	I0718 21:24:15.292604    7256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:15.292621    7256 client.go:171] duration metric: took 324.27325ms to LocalClient.Create
	I0718 21:24:17.295449    7256 start.go:128] duration metric: took 2.350805084s to createHost
	I0718 21:24:17.295560    7256 start.go:83] releasing machines lock for "custom-flannel-736000", held for 2.350979416s
	W0718 21:24:17.295609    7256 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:17.302879    7256 out.go:177] * Deleting "custom-flannel-736000" in qemu2 ...
	W0718 21:24:17.328361    7256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:17.328406    7256 start.go:729] Will try again in 5 seconds ...
	I0718 21:24:22.331949    7256 start.go:360] acquireMachinesLock for custom-flannel-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:22.332638    7256 start.go:364] duration metric: took 492.25µs to acquireMachinesLock for "custom-flannel-736000"
	I0718 21:24:22.332802    7256 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:22.333006    7256 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:22.347642    7256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:22.388130    7256 start.go:159] libmachine.API.Create for "custom-flannel-736000" (driver="qemu2")
	I0718 21:24:22.388183    7256 client.go:168] LocalClient.Create starting
	I0718 21:24:22.388300    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:22.388360    7256 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:22.388373    7256 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:22.388442    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:22.388483    7256 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:22.388492    7256 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:22.388976    7256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:22.539073    7256 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:22.608143    7256 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:22.608153    7256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:22.608336    7256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2
	I0718 21:24:22.618510    7256 main.go:141] libmachine: STDOUT: 
	I0718 21:24:22.618529    7256 main.go:141] libmachine: STDERR: 
	I0718 21:24:22.618593    7256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2 +20000M
	I0718 21:24:22.626516    7256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:22.626530    7256 main.go:141] libmachine: STDERR: 
	I0718 21:24:22.626540    7256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2
	I0718 21:24:22.626545    7256 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:22.626556    7256 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:22.626580    7256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:2a:8b:39:00:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/custom-flannel-736000/disk.qcow2
	I0718 21:24:22.628229    7256 main.go:141] libmachine: STDOUT: 
	I0718 21:24:22.628243    7256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:22.628256    7256 client.go:171] duration metric: took 240.019708ms to LocalClient.Create
	I0718 21:24:24.630850    7256 start.go:128] duration metric: took 2.297370041s to createHost
	I0718 21:24:24.630966    7256 start.go:83] releasing machines lock for "custom-flannel-736000", held for 2.297865167s
	W0718 21:24:24.631448    7256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:24.647098    7256 out.go:177] 
	W0718 21:24:24.651295    7256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:24:24.651323    7256 out.go:239] * 
	* 
	W0718 21:24:24.653999    7256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:24:24.662115    7256 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.85s)

                                                
                                    
TestNetworkPlugins/group/false/Start (9.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.710923166s)

                                                
                                                
-- stdout --
	* [false-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-736000" primary control-plane node in "false-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:24:27.038666    7376 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:24:27.038789    7376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:27.038791    7376 out.go:304] Setting ErrFile to fd 2...
	I0718 21:24:27.038794    7376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:27.038905    7376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:24:27.039977    7376 out.go:298] Setting JSON to false
	I0718 21:24:27.056261    7376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5035,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:24:27.056337    7376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:24:27.062515    7376 out.go:177] * [false-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:24:27.070405    7376 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:24:27.070454    7376 notify.go:220] Checking for updates...
	I0718 21:24:27.077369    7376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:24:27.080324    7376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:24:27.083397    7376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:24:27.090355    7376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:24:27.094338    7376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:24:27.097850    7376 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:24:27.097920    7376 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:24:27.097964    7376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:24:27.102344    7376 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:24:27.109379    7376 start.go:297] selected driver: qemu2
	I0718 21:24:27.109387    7376 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:24:27.109394    7376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:24:27.111852    7376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:24:27.115456    7376 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:24:27.118489    7376 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:24:27.118533    7376 cni.go:84] Creating CNI manager for "false"
	I0718 21:24:27.118559    7376 start.go:340] cluster config:
	{Name:false-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:24:27.122347    7376 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:24:27.129372    7376 out.go:177] * Starting "false-736000" primary control-plane node in "false-736000" cluster
	I0718 21:24:27.133375    7376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:24:27.133391    7376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:24:27.133406    7376 cache.go:56] Caching tarball of preloaded images
	I0718 21:24:27.133472    7376 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:24:27.133478    7376 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:24:27.133558    7376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/false-736000/config.json ...
	I0718 21:24:27.133577    7376 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/false-736000/config.json: {Name:mk671e5fc16da7a8b3804a2d23e30fb3f74f9569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:24:27.133792    7376 start.go:360] acquireMachinesLock for false-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:27.133827    7376 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "false-736000"
	I0718 21:24:27.133838    7376 start.go:93] Provisioning new machine with config: &{Name:false-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:27.133867    7376 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:27.138401    7376 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:27.155973    7376 start.go:159] libmachine.API.Create for "false-736000" (driver="qemu2")
	I0718 21:24:27.156007    7376 client.go:168] LocalClient.Create starting
	I0718 21:24:27.156062    7376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:27.156093    7376 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:27.156104    7376 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:27.156149    7376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:27.156173    7376 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:27.156182    7376 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:27.156526    7376 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:27.299994    7376 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:27.366129    7376 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:27.366139    7376 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:27.366338    7376 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2
	I0718 21:24:27.375995    7376 main.go:141] libmachine: STDOUT: 
	I0718 21:24:27.376010    7376 main.go:141] libmachine: STDERR: 
	I0718 21:24:27.376062    7376 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2 +20000M
	I0718 21:24:27.384144    7376 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:27.384162    7376 main.go:141] libmachine: STDERR: 
	I0718 21:24:27.384175    7376 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2
	I0718 21:24:27.384180    7376 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:27.384203    7376 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:27.384228    7376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:d5:7d:ce:3d:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2
	I0718 21:24:27.385930    7376 main.go:141] libmachine: STDOUT: 
	I0718 21:24:27.385944    7376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:27.385967    7376 client.go:171] duration metric: took 229.922917ms to LocalClient.Create
	I0718 21:24:29.388395    7376 start.go:128] duration metric: took 2.254211375s to createHost
	I0718 21:24:29.388423    7376 start.go:83] releasing machines lock for "false-736000", held for 2.254300166s
	W0718 21:24:29.388445    7376 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:29.400977    7376 out.go:177] * Deleting "false-736000" in qemu2 ...
	W0718 21:24:29.410076    7376 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:29.410085    7376 start.go:729] Will try again in 5 seconds ...
	I0718 21:24:34.412790    7376 start.go:360] acquireMachinesLock for false-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:34.413391    7376 start.go:364] duration metric: took 458.375µs to acquireMachinesLock for "false-736000"
	I0718 21:24:34.413606    7376 start.go:93] Provisioning new machine with config: &{Name:false-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:34.413874    7376 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:34.424631    7376 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:34.469112    7376 start.go:159] libmachine.API.Create for "false-736000" (driver="qemu2")
	I0718 21:24:34.469160    7376 client.go:168] LocalClient.Create starting
	I0718 21:24:34.469324    7376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:34.469399    7376 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:34.469416    7376 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:34.469479    7376 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:34.469520    7376 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:34.469534    7376 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:34.470012    7376 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:34.617905    7376 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:34.661132    7376 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:34.661141    7376 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:34.661337    7376 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2
	I0718 21:24:34.671067    7376 main.go:141] libmachine: STDOUT: 
	I0718 21:24:34.671091    7376 main.go:141] libmachine: STDERR: 
	I0718 21:24:34.671160    7376 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2 +20000M
	I0718 21:24:34.679400    7376 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:34.679425    7376 main.go:141] libmachine: STDERR: 
	I0718 21:24:34.679437    7376 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2
	I0718 21:24:34.679441    7376 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:34.679453    7376 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:34.679478    7376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:1c:dd:91:6a:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/false-736000/disk.qcow2
	I0718 21:24:34.681167    7376 main.go:141] libmachine: STDOUT: 
	I0718 21:24:34.681183    7376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:34.681195    7376 client.go:171] duration metric: took 212.010792ms to LocalClient.Create
	I0718 21:24:36.683564    7376 start.go:128] duration metric: took 2.269445208s to createHost
	I0718 21:24:36.683683    7376 start.go:83] releasing machines lock for "false-736000", held for 2.270104458s
	W0718 21:24:36.684148    7376 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:36.694737    7376 out.go:177] 
	W0718 21:24:36.698847    7376 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:24:36.698894    7376 out.go:239] * 
	* 
	W0718 21:24:36.701434    7376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:24:36.708771    7376 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.71s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.691346584s)

                                                
                                                
-- stdout --
	* [enable-default-cni-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-736000" primary control-plane node in "enable-default-cni-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:24:38.930210    7485 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:24:38.930352    7485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:38.930355    7485 out.go:304] Setting ErrFile to fd 2...
	I0718 21:24:38.930357    7485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:38.930499    7485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:24:38.931557    7485 out.go:298] Setting JSON to false
	I0718 21:24:38.948027    7485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5046,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:24:38.948109    7485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:24:38.953678    7485 out.go:177] * [enable-default-cni-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:24:38.960601    7485 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:24:38.960616    7485 notify.go:220] Checking for updates...
	I0718 21:24:38.967719    7485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:24:38.970717    7485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:24:38.973734    7485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:24:38.976724    7485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:24:38.979588    7485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:24:38.982935    7485 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:24:38.983003    7485 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:24:38.983059    7485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:24:38.986721    7485 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:24:38.993677    7485 start.go:297] selected driver: qemu2
	I0718 21:24:38.993683    7485 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:24:38.993689    7485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:24:38.995805    7485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:24:38.998674    7485 out.go:177] * Automatically selected the socket_vmnet network
	E0718 21:24:39.001637    7485 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0718 21:24:39.001648    7485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:24:39.001674    7485 cni.go:84] Creating CNI manager for "bridge"
	I0718 21:24:39.001677    7485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:24:39.001700    7485 start.go:340] cluster config:
	{Name:enable-default-cni-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:24:39.005060    7485 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:24:39.011655    7485 out.go:177] * Starting "enable-default-cni-736000" primary control-plane node in "enable-default-cni-736000" cluster
	I0718 21:24:39.015668    7485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:24:39.015682    7485 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:24:39.015689    7485 cache.go:56] Caching tarball of preloaded images
	I0718 21:24:39.015741    7485 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:24:39.015746    7485 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:24:39.015794    7485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/enable-default-cni-736000/config.json ...
	I0718 21:24:39.015806    7485 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/enable-default-cni-736000/config.json: {Name:mkfdad4413d843ea9bc79e4ca33c0cf052edd80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:24:39.016007    7485 start.go:360] acquireMachinesLock for enable-default-cni-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:39.016042    7485 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "enable-default-cni-736000"
	I0718 21:24:39.016053    7485 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:39.016078    7485 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:39.024677    7485 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:39.039832    7485 start.go:159] libmachine.API.Create for "enable-default-cni-736000" (driver="qemu2")
	I0718 21:24:39.039861    7485 client.go:168] LocalClient.Create starting
	I0718 21:24:39.039915    7485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:39.039947    7485 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:39.039956    7485 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:39.039994    7485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:39.040019    7485 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:39.040026    7485 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:39.040369    7485 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:39.182223    7485 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:39.219635    7485 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:39.219642    7485 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:39.219817    7485 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I0718 21:24:39.229044    7485 main.go:141] libmachine: STDOUT: 
	I0718 21:24:39.229060    7485 main.go:141] libmachine: STDERR: 
	I0718 21:24:39.229108    7485 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2 +20000M
	I0718 21:24:39.237009    7485 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:39.237026    7485 main.go:141] libmachine: STDERR: 
	I0718 21:24:39.237037    7485 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I0718 21:24:39.237043    7485 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:39.237060    7485 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:39.237088    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e0:63:2e:56:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I0718 21:24:39.238711    7485 main.go:141] libmachine: STDOUT: 
	I0718 21:24:39.238728    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:39.238745    7485 client.go:171] duration metric: took 198.869125ms to LocalClient.Create
	I0718 21:24:41.241052    7485 start.go:128] duration metric: took 2.224840708s to createHost
	I0718 21:24:41.241142    7485 start.go:83] releasing machines lock for "enable-default-cni-736000", held for 2.224988042s
	W0718 21:24:41.241192    7485 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:41.255070    7485 out.go:177] * Deleting "enable-default-cni-736000" in qemu2 ...
	W0718 21:24:41.282887    7485 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:41.282919    7485 start.go:729] Will try again in 5 seconds ...
	I0718 21:24:46.285251    7485 start.go:360] acquireMachinesLock for enable-default-cni-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:46.285859    7485 start.go:364] duration metric: took 522.166µs to acquireMachinesLock for "enable-default-cni-736000"
	I0718 21:24:46.285933    7485 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:46.286189    7485 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:46.291756    7485 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:46.338169    7485 start.go:159] libmachine.API.Create for "enable-default-cni-736000" (driver="qemu2")
	I0718 21:24:46.338228    7485 client.go:168] LocalClient.Create starting
	I0718 21:24:46.338347    7485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:46.338414    7485 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:46.338430    7485 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:46.338503    7485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:46.338551    7485 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:46.338568    7485 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:46.339130    7485 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:46.487501    7485 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:46.530626    7485 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:46.530638    7485 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:46.530803    7485 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I0718 21:24:46.540019    7485 main.go:141] libmachine: STDOUT: 
	I0718 21:24:46.540048    7485 main.go:141] libmachine: STDERR: 
	I0718 21:24:46.540105    7485 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2 +20000M
	I0718 21:24:46.548066    7485 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:46.548089    7485 main.go:141] libmachine: STDERR: 
	I0718 21:24:46.548107    7485 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I0718 21:24:46.548111    7485 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:46.548122    7485 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:46.548154    7485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:b5:b7:d3:90:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I0718 21:24:46.549768    7485 main.go:141] libmachine: STDOUT: 
	I0718 21:24:46.549782    7485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:46.549794    7485 client.go:171] duration metric: took 211.557208ms to LocalClient.Create
	I0718 21:24:48.552040    7485 start.go:128] duration metric: took 2.265765208s to createHost
	I0718 21:24:48.552122    7485 start.go:83] releasing machines lock for "enable-default-cni-736000", held for 2.266195708s
	W0718 21:24:48.552476    7485 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:48.566344    7485 out.go:177] 
	W0718 21:24:48.570354    7485 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:24:48.570391    7485 out.go:239] * 
	* 
	W0718 21:24:48.572946    7485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:24:48.582283    7485 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.69s)
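
Every Start failure in this group dies the same way: the qemu2 driver cannot reach the socket_vmnet UNIX socket at /var/run/socket_vmnet ("Connection refused"), so host creation fails on both attempts and minikube exits with GUEST_PROVISION (exit status 80) before any CNI work begins. A minimal check of the daemon on the affected host might look like the sketch below; the Homebrew service command assumes socket_vmnet was installed the way the qemu2 driver docs describe, which this report does not itself confirm.

$ ls -l /var/run/socket_vmnet                                             # the socket the qemu2 driver hands to QEMU as -netdev socket,fd=3
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true    # connectivity probe; "Connection refused" here reproduces the failure
$ sudo brew services start socket_vmnet                                   # assumed install method: restart the daemon if the socket is missing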

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.897137333s)

                                                
                                                
-- stdout --
	* [flannel-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-736000" primary control-plane node in "flannel-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:24:50.775039    7597 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:24:50.775188    7597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:50.775194    7597 out.go:304] Setting ErrFile to fd 2...
	I0718 21:24:50.775196    7597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:24:50.775329    7597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:24:50.776468    7597 out.go:298] Setting JSON to false
	I0718 21:24:50.792598    7597 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5058,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:24:50.792670    7597 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:24:50.799465    7597 out.go:177] * [flannel-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:24:50.807409    7597 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:24:50.807468    7597 notify.go:220] Checking for updates...
	I0718 21:24:50.814447    7597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:24:50.817418    7597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:24:50.820454    7597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:24:50.823454    7597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:24:50.826408    7597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:24:50.829874    7597 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:24:50.829950    7597 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:24:50.829997    7597 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:24:50.834413    7597 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:24:50.841405    7597 start.go:297] selected driver: qemu2
	I0718 21:24:50.841410    7597 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:24:50.841415    7597 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:24:50.843591    7597 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:24:50.846424    7597 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:24:50.849465    7597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:24:50.849484    7597 cni.go:84] Creating CNI manager for "flannel"
	I0718 21:24:50.849489    7597 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0718 21:24:50.849520    7597 start.go:340] cluster config:
	{Name:flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:24:50.853116    7597 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:24:50.856531    7597 out.go:177] * Starting "flannel-736000" primary control-plane node in "flannel-736000" cluster
	I0718 21:24:50.863410    7597 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:24:50.863426    7597 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:24:50.863438    7597 cache.go:56] Caching tarball of preloaded images
	I0718 21:24:50.863505    7597 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:24:50.863510    7597 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:24:50.863571    7597 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/flannel-736000/config.json ...
	I0718 21:24:50.863587    7597 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/flannel-736000/config.json: {Name:mk7e7ade08a64dec6ee31d40b0cde21c096bd81d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:24:50.863785    7597 start.go:360] acquireMachinesLock for flannel-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:50.863815    7597 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "flannel-736000"
	I0718 21:24:50.863828    7597 start.go:93] Provisioning new machine with config: &{Name:flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:50.863859    7597 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:50.870416    7597 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:50.886143    7597 start.go:159] libmachine.API.Create for "flannel-736000" (driver="qemu2")
	I0718 21:24:50.886166    7597 client.go:168] LocalClient.Create starting
	I0718 21:24:50.886230    7597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:50.886259    7597 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:50.886267    7597 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:50.886304    7597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:50.886326    7597 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:50.886336    7597 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:50.886658    7597 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:51.028366    7597 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:51.237147    7597 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:51.237167    7597 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:51.237371    7597 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2
	I0718 21:24:51.247325    7597 main.go:141] libmachine: STDOUT: 
	I0718 21:24:51.247343    7597 main.go:141] libmachine: STDERR: 
	I0718 21:24:51.247394    7597 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2 +20000M
	I0718 21:24:51.255695    7597 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:51.255713    7597 main.go:141] libmachine: STDERR: 
	I0718 21:24:51.255736    7597 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2
	I0718 21:24:51.255743    7597 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:51.255759    7597 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:51.255788    7597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:f3:c4:3b:84:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2
	I0718 21:24:51.257552    7597 main.go:141] libmachine: STDOUT: 
	I0718 21:24:51.257567    7597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:51.257584    7597 client.go:171] duration metric: took 371.412375ms to LocalClient.Create
	I0718 21:24:53.259767    7597 start.go:128] duration metric: took 2.395879167s to createHost
	I0718 21:24:53.259795    7597 start.go:83] releasing machines lock for "flannel-736000", held for 2.395957084s
	W0718 21:24:53.259834    7597 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:53.265505    7597 out.go:177] * Deleting "flannel-736000" in qemu2 ...
	W0718 21:24:53.283117    7597 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:24:53.283131    7597 start.go:729] Will try again in 5 seconds ...
	I0718 21:24:58.284316    7597 start.go:360] acquireMachinesLock for flannel-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:24:58.284582    7597 start.go:364] duration metric: took 217.125µs to acquireMachinesLock for "flannel-736000"
	I0718 21:24:58.284648    7597 start.go:93] Provisioning new machine with config: &{Name:flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:24:58.284785    7597 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:24:58.294392    7597 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:24:58.328354    7597 start.go:159] libmachine.API.Create for "flannel-736000" (driver="qemu2")
	I0718 21:24:58.328398    7597 client.go:168] LocalClient.Create starting
	I0718 21:24:58.328509    7597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:24:58.328568    7597 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:58.328585    7597 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:58.328638    7597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:24:58.328675    7597 main.go:141] libmachine: Decoding PEM data...
	I0718 21:24:58.328693    7597 main.go:141] libmachine: Parsing certificate...
	I0718 21:24:58.329149    7597 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:24:58.474004    7597 main.go:141] libmachine: Creating SSH key...
	I0718 21:24:58.593762    7597 main.go:141] libmachine: Creating Disk image...
	I0718 21:24:58.593773    7597 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:24:58.593941    7597 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2
	I0718 21:24:58.602998    7597 main.go:141] libmachine: STDOUT: 
	I0718 21:24:58.603018    7597 main.go:141] libmachine: STDERR: 
	I0718 21:24:58.603076    7597 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2 +20000M
	I0718 21:24:58.611047    7597 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:24:58.611064    7597 main.go:141] libmachine: STDERR: 
	I0718 21:24:58.611098    7597 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2
	I0718 21:24:58.611107    7597 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:24:58.611117    7597 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:24:58.611152    7597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:06:d7:f8:32:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/flannel-736000/disk.qcow2
	I0718 21:24:58.612736    7597 main.go:141] libmachine: STDOUT: 
	I0718 21:24:58.612750    7597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:24:58.612764    7597 client.go:171] duration metric: took 284.363166ms to LocalClient.Create
	I0718 21:25:00.614037    7597 start.go:128] duration metric: took 2.329251125s to createHost
	I0718 21:25:00.614047    7597 start.go:83] releasing machines lock for "flannel-736000", held for 2.32946725s
	W0718 21:25:00.614137    7597 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:00.620357    7597 out.go:177] 
	W0718 21:25:00.625371    7597 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:00.625378    7597 out.go:239] * 
	* 
	W0718 21:25:00.625874    7597 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:00.637333    7597 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.90s)
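
The flannel run (and the bridge run that follows) fails identically, so the CNI choice is not a factor; the "Connection refused" error appears before any Kubernetes component starts. To separate the test harness from the driver problem, the failing start can be replayed by hand with the exact flags the test passes (binary path and profile name copied from the log above; run from the integration workspace):

$ out/minikube-darwin-arm64 start -p flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2
$ out/minikube-darwin-arm64 delete -p flannel-736000    # clean up the half-created profile afterwards, as the log itself suggests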

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.795364875s)

                                                
                                                
-- stdout --
	* [bridge-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-736000" primary control-plane node in "bridge-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:25:02.991954    7719 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:02.992440    7719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:02.992668    7719 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:02.992673    7719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:02.993199    7719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:02.994541    7719 out.go:298] Setting JSON to false
	I0718 21:25:03.011411    7719 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5071,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:03.011480    7719 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:03.018208    7719 out.go:177] * [bridge-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:03.026186    7719 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:03.026214    7719 notify.go:220] Checking for updates...
	I0718 21:25:03.033076    7719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:03.036162    7719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:03.039282    7719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:03.040723    7719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:03.044127    7719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:03.047443    7719 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:25:03.047512    7719 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:25:03.047565    7719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:03.051987    7719 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:25:03.059103    7719 start.go:297] selected driver: qemu2
	I0718 21:25:03.059109    7719 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:25:03.059119    7719 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:03.061609    7719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:25:03.064176    7719 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:25:03.067298    7719 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:03.067365    7719 cni.go:84] Creating CNI manager for "bridge"
	I0718 21:25:03.067370    7719 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:25:03.067412    7719 start.go:340] cluster config:
	{Name:bridge-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:03.071287    7719 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:03.077047    7719 out.go:177] * Starting "bridge-736000" primary control-plane node in "bridge-736000" cluster
	I0718 21:25:03.081139    7719 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:25:03.081156    7719 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:25:03.081169    7719 cache.go:56] Caching tarball of preloaded images
	I0718 21:25:03.081239    7719 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:25:03.081244    7719 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:25:03.081305    7719 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/bridge-736000/config.json ...
	I0718 21:25:03.081318    7719 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/bridge-736000/config.json: {Name:mk3d474bdea65cb9caf5995243bc97f7c265b5ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:25:03.081536    7719 start.go:360] acquireMachinesLock for bridge-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:03.081569    7719 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "bridge-736000"
	I0718 21:25:03.081580    7719 start.go:93] Provisioning new machine with config: &{Name:bridge-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:03.081631    7719 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:03.085049    7719 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:25:03.100628    7719 start.go:159] libmachine.API.Create for "bridge-736000" (driver="qemu2")
	I0718 21:25:03.100656    7719 client.go:168] LocalClient.Create starting
	I0718 21:25:03.100722    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:03.100751    7719 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:03.100759    7719 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:03.100804    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:03.100827    7719 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:03.100838    7719 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:03.101261    7719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:03.242650    7719 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:03.340565    7719 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:03.340574    7719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:03.340756    7719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2
	I0718 21:25:03.350136    7719 main.go:141] libmachine: STDOUT: 
	I0718 21:25:03.350153    7719 main.go:141] libmachine: STDERR: 
	I0718 21:25:03.350201    7719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2 +20000M
	I0718 21:25:03.358178    7719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:03.358190    7719 main.go:141] libmachine: STDERR: 
	I0718 21:25:03.358208    7719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2
	I0718 21:25:03.358213    7719 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:03.358225    7719 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:03.358255    7719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:39:75:7c:58:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2
	I0718 21:25:03.359894    7719 main.go:141] libmachine: STDOUT: 
	I0718 21:25:03.359907    7719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:03.359922    7719 client.go:171] duration metric: took 259.264042ms to LocalClient.Create
	I0718 21:25:05.362003    7719 start.go:128] duration metric: took 2.280381875s to createHost
	I0718 21:25:05.362035    7719 start.go:83] releasing machines lock for "bridge-736000", held for 2.280485042s
	W0718 21:25:05.362090    7719 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:05.373366    7719 out.go:177] * Deleting "bridge-736000" in qemu2 ...
	W0718 21:25:05.399039    7719 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:05.399060    7719 start.go:729] Will try again in 5 seconds ...
	I0718 21:25:10.401248    7719 start.go:360] acquireMachinesLock for bridge-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:10.401747    7719 start.go:364] duration metric: took 397.25µs to acquireMachinesLock for "bridge-736000"
	I0718 21:25:10.401804    7719 start.go:93] Provisioning new machine with config: &{Name:bridge-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:10.402067    7719 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:10.412152    7719 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:25:10.461719    7719 start.go:159] libmachine.API.Create for "bridge-736000" (driver="qemu2")
	I0718 21:25:10.461767    7719 client.go:168] LocalClient.Create starting
	I0718 21:25:10.461875    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:10.461940    7719 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:10.461956    7719 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:10.462029    7719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:10.462073    7719 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:10.462084    7719 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:10.462607    7719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:10.611843    7719 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:10.694390    7719 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:10.694400    7719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:10.696217    7719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2
	I0718 21:25:10.705547    7719 main.go:141] libmachine: STDOUT: 
	I0718 21:25:10.705564    7719 main.go:141] libmachine: STDERR: 
	I0718 21:25:10.705624    7719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2 +20000M
	I0718 21:25:10.713926    7719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:10.713938    7719 main.go:141] libmachine: STDERR: 
	I0718 21:25:10.713950    7719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2
	I0718 21:25:10.713954    7719 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:10.713962    7719 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:10.713989    7719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:34:0f:6f:16:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/bridge-736000/disk.qcow2
	I0718 21:25:10.715676    7719 main.go:141] libmachine: STDOUT: 
	I0718 21:25:10.715689    7719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:10.715700    7719 client.go:171] duration metric: took 253.931209ms to LocalClient.Create
	I0718 21:25:12.717889    7719 start.go:128] duration metric: took 2.315804083s to createHost
	I0718 21:25:12.717964    7719 start.go:83] releasing machines lock for "bridge-736000", held for 2.316232834s
	W0718 21:25:12.718372    7719 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:12.728083    7719 out.go:177] 
	W0718 21:25:12.734130    7719 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:12.734152    7719 out.go:239] * 
	* 
	W0718 21:25:12.736568    7719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:12.746062    7719 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.80s)
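Every retry above fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and libmachine aborts host creation. A minimal diagnostic sketch for the build host follows; the daemon binary path and the --vmnet-gateway value are assumptions (only the client path appears in the log), so adjust them to the local socket_vmnet installation:

    # Is the daemon's unix socket present at the path minikube is configured to use?
    ls -l /var/run/socket_vmnet

    # Is a socket_vmnet daemon process actually running?
    pgrep -fl socket_vmnet

    # If not, start it as root so the client invocations recorded above can connect
    # (hypothetical daemon path and gateway address):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once something is listening on /var/run/socket_vmnet, the socket_vmnet_client ... qemu-system-aarch64 command shown in the log should no longer exit with "Connection refused".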

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.890640541s)

-- stdout --
	* [kubenet-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-736000" primary control-plane node in "kubenet-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:25:14.913976    7832 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:14.914110    7832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:14.914113    7832 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:14.914116    7832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:14.914250    7832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:14.915337    7832 out.go:298] Setting JSON to false
	I0718 21:25:14.931578    7832 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5082,"bootTime":1721358032,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:14.931658    7832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:14.936592    7832 out.go:177] * [kubenet-736000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:14.944644    7832 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:14.944678    7832 notify.go:220] Checking for updates...
	I0718 21:25:14.952592    7832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:14.955594    7832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:14.958605    7832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:14.961565    7832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:14.963069    7832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:14.966843    7832 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:25:14.966913    7832 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:25:14.966962    7832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:14.970607    7832 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:25:14.976620    7832 start.go:297] selected driver: qemu2
	I0718 21:25:14.976626    7832 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:25:14.976633    7832 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:14.978844    7832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:25:14.982579    7832 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:25:14.985685    7832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:14.985699    7832 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0718 21:25:14.985718    7832 start.go:340] cluster config:
	{Name:kubenet-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:14.989143    7832 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:14.996606    7832 out.go:177] * Starting "kubenet-736000" primary control-plane node in "kubenet-736000" cluster
	I0718 21:25:15.000552    7832 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:25:15.000566    7832 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:25:15.000574    7832 cache.go:56] Caching tarball of preloaded images
	I0718 21:25:15.000630    7832 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:25:15.000635    7832 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:25:15.000687    7832 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kubenet-736000/config.json ...
	I0718 21:25:15.000699    7832 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/kubenet-736000/config.json: {Name:mkbbfb0ff28a665d8336c70b9a3a4322b8de0917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:25:15.000898    7832 start.go:360] acquireMachinesLock for kubenet-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:15.000928    7832 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "kubenet-736000"
	I0718 21:25:15.000939    7832 start.go:93] Provisioning new machine with config: &{Name:kubenet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:15.000964    7832 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:15.008539    7832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:25:15.023542    7832 start.go:159] libmachine.API.Create for "kubenet-736000" (driver="qemu2")
	I0718 21:25:15.023566    7832 client.go:168] LocalClient.Create starting
	I0718 21:25:15.023628    7832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:15.023657    7832 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:15.023672    7832 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:15.023704    7832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:15.023726    7832 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:15.023732    7832 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:15.024101    7832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:15.162316    7832 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:15.283136    7832 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:15.283142    7832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:15.283328    7832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2
	I0718 21:25:15.292606    7832 main.go:141] libmachine: STDOUT: 
	I0718 21:25:15.292629    7832 main.go:141] libmachine: STDERR: 
	I0718 21:25:15.292679    7832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2 +20000M
	I0718 21:25:15.300538    7832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:15.300553    7832 main.go:141] libmachine: STDERR: 
	I0718 21:25:15.300573    7832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2
	I0718 21:25:15.300581    7832 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:15.300595    7832 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:15.300638    7832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:d2:27:64:cc:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2
	I0718 21:25:15.302224    7832 main.go:141] libmachine: STDOUT: 
	I0718 21:25:15.302237    7832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:15.302263    7832 client.go:171] duration metric: took 278.691667ms to LocalClient.Create
	I0718 21:25:17.304391    7832 start.go:128] duration metric: took 2.303451s to createHost
	I0718 21:25:17.304449    7832 start.go:83] releasing machines lock for "kubenet-736000", held for 2.303558125s
	W0718 21:25:17.304523    7832 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:17.312457    7832 out.go:177] * Deleting "kubenet-736000" in qemu2 ...
	W0718 21:25:17.334379    7832 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:17.334402    7832 start.go:729] Will try again in 5 seconds ...
	I0718 21:25:22.335215    7832 start.go:360] acquireMachinesLock for kubenet-736000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:22.335436    7832 start.go:364] duration metric: took 179.625µs to acquireMachinesLock for "kubenet-736000"
	I0718 21:25:22.335496    7832 start.go:93] Provisioning new machine with config: &{Name:kubenet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:22.335708    7832 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:22.344150    7832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:25:22.374330    7832 start.go:159] libmachine.API.Create for "kubenet-736000" (driver="qemu2")
	I0718 21:25:22.374366    7832 client.go:168] LocalClient.Create starting
	I0718 21:25:22.374455    7832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:22.374514    7832 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:22.374528    7832 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:22.374576    7832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:22.374608    7832 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:22.374617    7832 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:22.375048    7832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:22.516574    7832 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:22.715179    7832 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:22.715190    7832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:22.715405    7832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2
	I0718 21:25:22.725026    7832 main.go:141] libmachine: STDOUT: 
	I0718 21:25:22.725052    7832 main.go:141] libmachine: STDERR: 
	I0718 21:25:22.725101    7832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2 +20000M
	I0718 21:25:22.733156    7832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:22.733171    7832 main.go:141] libmachine: STDERR: 
	I0718 21:25:22.733189    7832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2
	I0718 21:25:22.733195    7832 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:22.733206    7832 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:22.733231    7832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d4:b0:9f:b7:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/kubenet-736000/disk.qcow2
	I0718 21:25:22.734837    7832 main.go:141] libmachine: STDOUT: 
	I0718 21:25:22.734891    7832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:22.734903    7832 client.go:171] duration metric: took 360.53975ms to LocalClient.Create
	I0718 21:25:24.737100    7832 start.go:128] duration metric: took 2.401397583s to createHost
	I0718 21:25:24.737186    7832 start.go:83] releasing machines lock for "kubenet-736000", held for 2.401786458s
	W0718 21:25:24.737623    7832 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:24.747319    7832 out.go:177] 
	W0718 21:25:24.753341    7832 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:24.753371    7832 out.go:239] * 
	* 
	W0718 21:25:24.755928    7832 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:24.763291    7832 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
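The kubenet failure is the same refused connection from socket_vmnet_client, so it can be reproduced (or a fix confirmed) without running minikube at all. A small sketch using only paths taken from the log; the trailing "true" is a placeholder command, on the assumption that socket_vmnet_client connects to the socket and then runs the command that follows, passing it the connection (the wrapped qemu command's -netdev socket,id=net0,fd=3 refers to that descriptor):

    # Exits non-zero with "Connection refused" while the daemon is down,
    # and succeeds once socket_vmnet is listening again:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      && echo "socket_vmnet reachable" \
      || echo "socket_vmnet still refusing connections"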

TestStartStop/group/old-k8s-version/serial/FirstStart (10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-969000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-969000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.937464917s)

-- stdout --
	* [old-k8s-version-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-969000" primary control-plane node in "old-k8s-version-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:25:27.000001    7945 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:27.000162    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:27.000166    7945 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:27.000168    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:27.000293    7945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:27.001462    7945 out.go:298] Setting JSON to false
	I0718 21:25:27.018422    7945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5095,"bootTime":1721358032,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:27.018491    7945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:27.023055    7945 out.go:177] * [old-k8s-version-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:27.031067    7945 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:27.031091    7945 notify.go:220] Checking for updates...
	I0718 21:25:27.038052    7945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:27.041084    7945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:27.044079    7945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:27.046996    7945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:27.050109    7945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:27.066523    7945 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:25:27.066603    7945 config.go:182] Loaded profile config "stopped-upgrade-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0718 21:25:27.066660    7945 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:27.071039    7945 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:25:27.078025    7945 start.go:297] selected driver: qemu2
	I0718 21:25:27.078032    7945 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:25:27.078039    7945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:27.080583    7945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:25:27.084052    7945 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:25:27.087165    7945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:27.087211    7945 cni.go:84] Creating CNI manager for ""
	I0718 21:25:27.087218    7945 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 21:25:27.087248    7945 start.go:340] cluster config:
	{Name:old-k8s-version-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:27.090942    7945 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:27.098055    7945 out.go:177] * Starting "old-k8s-version-969000" primary control-plane node in "old-k8s-version-969000" cluster
	I0718 21:25:27.102016    7945 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 21:25:27.102031    7945 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 21:25:27.102041    7945 cache.go:56] Caching tarball of preloaded images
	I0718 21:25:27.102105    7945 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:25:27.102110    7945 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0718 21:25:27.102172    7945 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/old-k8s-version-969000/config.json ...
	I0718 21:25:27.102187    7945 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/old-k8s-version-969000/config.json: {Name:mk24427200d7cb44add65a7ceb6905b9b24b77b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:25:27.102522    7945 start.go:360] acquireMachinesLock for old-k8s-version-969000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:27.102557    7945 start.go:364] duration metric: took 27.791µs to acquireMachinesLock for "old-k8s-version-969000"
	I0718 21:25:27.102568    7945 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:27.102601    7945 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:27.107073    7945 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:25:27.124123    7945 start.go:159] libmachine.API.Create for "old-k8s-version-969000" (driver="qemu2")
	I0718 21:25:27.124152    7945 client.go:168] LocalClient.Create starting
	I0718 21:25:27.124219    7945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:27.124251    7945 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:27.124261    7945 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:27.124301    7945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:27.124337    7945 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:27.124347    7945 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:27.124750    7945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:27.264736    7945 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:27.418223    7945 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:27.418231    7945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:27.418433    7945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:27.428583    7945 main.go:141] libmachine: STDOUT: 
	I0718 21:25:27.428604    7945 main.go:141] libmachine: STDERR: 
	I0718 21:25:27.428686    7945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2 +20000M
	I0718 21:25:27.437341    7945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:27.437356    7945 main.go:141] libmachine: STDERR: 
	I0718 21:25:27.437371    7945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:27.437375    7945 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:27.437388    7945 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:27.437422    7945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:de:1a:91:02:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:27.439239    7945 main.go:141] libmachine: STDOUT: 
	I0718 21:25:27.439253    7945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:27.439268    7945 client.go:171] duration metric: took 315.12ms to LocalClient.Create
	I0718 21:25:29.441412    7945 start.go:128] duration metric: took 2.338834084s to createHost
	I0718 21:25:29.441476    7945 start.go:83] releasing machines lock for "old-k8s-version-969000", held for 2.338966083s
	W0718 21:25:29.441570    7945 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:29.454229    7945 out.go:177] * Deleting "old-k8s-version-969000" in qemu2 ...
	W0718 21:25:29.476559    7945 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:29.476582    7945 start.go:729] Will try again in 5 seconds ...
	I0718 21:25:34.478665    7945 start.go:360] acquireMachinesLock for old-k8s-version-969000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:34.479338    7945 start.go:364] duration metric: took 547.375µs to acquireMachinesLock for "old-k8s-version-969000"
	I0718 21:25:34.479488    7945 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:34.479757    7945 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:34.489445    7945 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:25:34.538606    7945 start.go:159] libmachine.API.Create for "old-k8s-version-969000" (driver="qemu2")
	I0718 21:25:34.538650    7945 client.go:168] LocalClient.Create starting
	I0718 21:25:34.538761    7945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:34.538839    7945 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:34.538855    7945 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:34.538945    7945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:34.539010    7945 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:34.539023    7945 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:34.539582    7945 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:34.690890    7945 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:34.847653    7945 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:34.847662    7945 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:34.847853    7945 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:34.857368    7945 main.go:141] libmachine: STDOUT: 
	I0718 21:25:34.857384    7945 main.go:141] libmachine: STDERR: 
	I0718 21:25:34.857431    7945 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2 +20000M
	I0718 21:25:34.865278    7945 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:34.865293    7945 main.go:141] libmachine: STDERR: 
	I0718 21:25:34.865303    7945 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:34.865309    7945 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:34.865327    7945 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:34.865361    7945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:55:5e:f2:6f:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:34.867111    7945 main.go:141] libmachine: STDOUT: 
	I0718 21:25:34.867125    7945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:34.867137    7945 client.go:171] duration metric: took 328.491292ms to LocalClient.Create
	I0718 21:25:36.869311    7945 start.go:128] duration metric: took 2.38957075s to createHost
	I0718 21:25:36.869390    7945 start.go:83] releasing machines lock for "old-k8s-version-969000", held for 2.39008725s
	W0718 21:25:36.869851    7945 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:36.880677    7945 out.go:177] 
	W0718 21:25:36.884697    7945 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:36.884843    7945 out.go:239] * 
	* 
	W0718 21:25:36.887692    7945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:36.895632    7945 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-969000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (65.122959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.00s)
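Every start attempt in the run above fails at the same step: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), so the VM never boots and the profile is left in the Stopped state reported by the post-mortem. The snippet below is a minimal, hypothetical Go probe (not part of the test suite) that performs the same connectivity check; the socket path is taken from the failing command lines above.

// probe_socket_vmnet.go - hypothetical sketch: check whether anything is
// listening on the unix socket used by the qemu2 driver on this host.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path seen in the failing "executing:" lines
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the STDERR captured by libmachine:
		// the socket path may exist, but no socket_vmnet daemon is accepting connections.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}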

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-969000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-969000 create -f testdata/busybox.yaml: exit status 1 (30.065458ms)

** stderr ** 
	error: context "old-k8s-version-969000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-969000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (30.412125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (28.597625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-969000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-969000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-969000 describe deploy/metrics-server -n kube-system: exit status 1 (26.626166ms)

** stderr ** 
	error: context "old-k8s-version-969000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-969000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (29.218166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-969000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-969000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.189925708s)

-- stdout --
	* [old-k8s-version-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-969000" primary control-plane node in "old-k8s-version-969000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:25:40.998525    7995 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:40.998664    7995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:40.998669    7995 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:40.998671    7995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:40.998801    7995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:40.999820    7995 out.go:298] Setting JSON to false
	I0718 21:25:41.015763    7995 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5109,"bootTime":1721358032,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:41.015835    7995 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:41.021160    7995 out.go:177] * [old-k8s-version-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:41.028209    7995 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:41.028263    7995 notify.go:220] Checking for updates...
	I0718 21:25:41.035201    7995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:41.038135    7995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:41.041225    7995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:41.044224    7995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:41.045655    7995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:41.049433    7995 config.go:182] Loaded profile config "old-k8s-version-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0718 21:25:41.053238    7995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0718 21:25:41.056225    7995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:41.060145    7995 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:25:41.067250    7995 start.go:297] selected driver: qemu2
	I0718 21:25:41.067258    7995 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:41.067332    7995 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:41.069660    7995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:41.069686    7995 cni.go:84] Creating CNI manager for ""
	I0718 21:25:41.069692    7995 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 21:25:41.069719    7995 start.go:340] cluster config:
	{Name:old-k8s-version-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-969000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:41.073319    7995 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.081210    7995 out.go:177] * Starting "old-k8s-version-969000" primary control-plane node in "old-k8s-version-969000" cluster
	I0718 21:25:41.085135    7995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 21:25:41.085150    7995 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 21:25:41.085161    7995 cache.go:56] Caching tarball of preloaded images
	I0718 21:25:41.085226    7995 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:25:41.085231    7995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0718 21:25:41.085276    7995 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/old-k8s-version-969000/config.json ...
	I0718 21:25:41.085744    7995 start.go:360] acquireMachinesLock for old-k8s-version-969000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:41.085776    7995 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "old-k8s-version-969000"
	I0718 21:25:41.085786    7995 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:25:41.085792    7995 fix.go:54] fixHost starting: 
	I0718 21:25:41.085907    7995 fix.go:112] recreateIfNeeded on old-k8s-version-969000: state=Stopped err=<nil>
	W0718 21:25:41.085915    7995 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:25:41.089224    7995 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-969000" ...
	I0718 21:25:41.097168    7995 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:41.097200    7995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:55:5e:f2:6f:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:41.099201    7995 main.go:141] libmachine: STDOUT: 
	I0718 21:25:41.099221    7995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:41.099258    7995 fix.go:56] duration metric: took 13.466833ms for fixHost
	I0718 21:25:41.099262    7995 start.go:83] releasing machines lock for "old-k8s-version-969000", held for 13.4815ms
	W0718 21:25:41.099268    7995 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:41.099301    7995 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:41.099305    7995 start.go:729] Will try again in 5 seconds ...
	I0718 21:25:46.101337    7995 start.go:360] acquireMachinesLock for old-k8s-version-969000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:46.101788    7995 start.go:364] duration metric: took 367.792µs to acquireMachinesLock for "old-k8s-version-969000"
	I0718 21:25:46.101905    7995 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:25:46.101927    7995 fix.go:54] fixHost starting: 
	I0718 21:25:46.102668    7995 fix.go:112] recreateIfNeeded on old-k8s-version-969000: state=Stopped err=<nil>
	W0718 21:25:46.102695    7995 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:25:46.107146    7995 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-969000" ...
	I0718 21:25:46.115126    7995 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:46.115310    7995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:55:5e:f2:6f:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/old-k8s-version-969000/disk.qcow2
	I0718 21:25:46.125143    7995 main.go:141] libmachine: STDOUT: 
	I0718 21:25:46.125207    7995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:46.125305    7995 fix.go:56] duration metric: took 23.366667ms for fixHost
	I0718 21:25:46.125326    7995 start.go:83] releasing machines lock for "old-k8s-version-969000", held for 23.517541ms
	W0718 21:25:46.125498    7995 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:46.134171    7995 out.go:177] 
	W0718 21:25:46.137156    7995 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:46.137180    7995 out.go:239] * 
	* 
	W0718 21:25:46.139968    7995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:46.149103    7995 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-969000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (65.743167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
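The SecondStart attempt fails for the same reason as FirstStart: both the fresh-create path (createHost) and the restart path (fixHost) shell out through socket_vmnet_client, as the "executing:" lines show, so the run never gets past the network client while the socket_vmnet daemon is unreachable. Below is a simplified, hypothetical Go sketch of that wrapper pattern; the client and socket paths are the ones in the log, while the qemu argument list is abbreviated for illustration and is not minikube's actual driver code.

// run_via_socket_vmnet.go - hypothetical sketch of the invocation pattern in
// the "executing:" lines: qemu is not launched directly, it is prefixed with
// socket_vmnet_client and the socket path, so the client fails first whenever
// the daemon is down.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	client := "/opt/socket_vmnet/bin/socket_vmnet_client"
	sock := "/var/run/socket_vmnet"
	// Abbreviated qemu command line; the real invocation in the log carries the
	// full machine, drive, cdrom, qmp, and netdev configuration.
	qemuArgs := []string{"qemu-system-aarch64", "-M", "virt,highmem=off", "-accel", "hvf"}

	cmd := exec.Command(client, append([]string{sock}, qemuArgs...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// With socket_vmnet down this exits before qemu ever starts, mirroring
		// the "exit status 1" wrapped into the GUEST_PROVISION error above.
		fmt.Fprintln(os.Stderr, "wrapped qemu start failed:", err)
		os.Exit(1)
	}
}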

TestStartStop/group/no-preload/serial/FirstStart (10.2s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (10.129499459s)

-- stdout --
	* [no-preload-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-436000" primary control-plane node in "no-preload-436000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-436000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:25:41.388577    8005 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:41.388716    8005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:41.388720    8005 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:41.388722    8005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:41.388852    8005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:41.389772    8005 out.go:298] Setting JSON to false
	I0718 21:25:41.405712    8005 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5109,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:41.405777    8005 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:41.409271    8005 out.go:177] * [no-preload-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:41.416371    8005 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:41.416483    8005 notify.go:220] Checking for updates...
	I0718 21:25:41.422339    8005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:41.425366    8005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:41.426534    8005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:41.429334    8005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:41.432496    8005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:41.435760    8005 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:25:41.435852    8005 config.go:182] Loaded profile config "old-k8s-version-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0718 21:25:41.435900    8005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:41.440267    8005 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:25:41.447379    8005 start.go:297] selected driver: qemu2
	I0718 21:25:41.447386    8005 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:25:41.447393    8005 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:41.449644    8005 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:25:41.452335    8005 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:25:41.455457    8005 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:41.455517    8005 cni.go:84] Creating CNI manager for ""
	I0718 21:25:41.455526    8005 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:25:41.455533    8005 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:25:41.455560    8005 start.go:340] cluster config:
	{Name:no-preload-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vm
net/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:41.459367    8005 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.466325    8005 out.go:177] * Starting "no-preload-436000" primary control-plane node in "no-preload-436000" cluster
	I0718 21:25:41.470371    8005 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 21:25:41.470429    8005 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/no-preload-436000/config.json ...
	I0718 21:25:41.470445    8005 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/no-preload-436000/config.json: {Name:mkbebaf727b9b03b9123683a121e52a15b1a1831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:25:41.470453    8005 cache.go:107] acquiring lock: {Name:mk538a76863935988285d11f5e65da707adf42e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470452    8005 cache.go:107] acquiring lock: {Name:mk67b75e898accbb5972f8d51a2c9110da933059 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470463    8005 cache.go:107] acquiring lock: {Name:mk57b13e2f5dbbd122340920059cb05e82ebc2df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470514    8005 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 21:25:41.470523    8005 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70µs
	I0718 21:25:41.470530    8005 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 21:25:41.470549    8005 cache.go:107] acquiring lock: {Name:mk4abfe95e918e3f025f575f8f2a4f61553c68e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470606    8005 cache.go:107] acquiring lock: {Name:mk9a573cfb1af555dc58451597ba91b25490ae9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470628    8005 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0718 21:25:41.470645    8005 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0718 21:25:41.470667    8005 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0718 21:25:41.470644    8005 cache.go:107] acquiring lock: {Name:mk83dc5c223dc577731cb4743008707595383e6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470677    8005 cache.go:107] acquiring lock: {Name:mkb1cce8d54c03d778460e2aa3504bd926f7ea18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470725    8005 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0718 21:25:41.470750    8005 start.go:360] acquireMachinesLock for no-preload-436000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:41.470735    8005 cache.go:107] acquiring lock: {Name:mk30e3561d422fad73565322ebcce9e8ba0d5ab1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:41.470819    8005 start.go:364] duration metric: took 62.667µs to acquireMachinesLock for "no-preload-436000"
	I0718 21:25:41.470833    8005 start.go:93] Provisioning new machine with config: &{Name:no-preload-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:41.470883    8005 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:41.470905    8005 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0718 21:25:41.470907    8005 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0718 21:25:41.470992    8005 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0718 21:25:41.475331    8005 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:25:41.482173    8005 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0718 21:25:41.482219    8005 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0718 21:25:41.482246    8005 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0718 21:25:41.483757    8005 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0718 21:25:41.483757    8005 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0718 21:25:41.483913    8005 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0718 21:25:41.483939    8005 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0718 21:25:41.493208    8005 start.go:159] libmachine.API.Create for "no-preload-436000" (driver="qemu2")
	I0718 21:25:41.493228    8005 client.go:168] LocalClient.Create starting
	I0718 21:25:41.493307    8005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:41.493340    8005 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:41.493348    8005 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:41.493387    8005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:41.493424    8005 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:41.493435    8005 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:41.493770    8005 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:41.635911    8005 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:41.784019    8005 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:41.784036    8005 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:41.784231    8005 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:41.793629    8005 main.go:141] libmachine: STDOUT: 
	I0718 21:25:41.793648    8005 main.go:141] libmachine: STDERR: 
	I0718 21:25:41.793695    8005 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2 +20000M
	I0718 21:25:41.801909    8005 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:41.801926    8005 main.go:141] libmachine: STDERR: 
	I0718 21:25:41.801945    8005 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:41.801948    8005 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:41.801958    8005 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:41.801984    8005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e9:85:63:6e:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:41.803833    8005 main.go:141] libmachine: STDOUT: 
	I0718 21:25:41.803853    8005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:41.803876    8005 client.go:171] duration metric: took 310.65025ms to LocalClient.Create
	I0718 21:25:41.907735    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0718 21:25:41.916074    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0718 21:25:41.919182    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0718 21:25:41.924734    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0718 21:25:41.943354    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0718 21:25:41.962039    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0718 21:25:42.009600    8005 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0718 21:25:42.142547    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0718 21:25:42.142612    8005 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 672.042708ms
	I0718 21:25:42.142640    8005 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0718 21:25:43.804254    8005 start.go:128] duration metric: took 2.333402584s to createHost
	I0718 21:25:43.804337    8005 start.go:83] releasing machines lock for "no-preload-436000", held for 2.333568042s
	W0718 21:25:43.804402    8005 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:43.821125    8005 out.go:177] * Deleting "no-preload-436000" in qemu2 ...
	W0718 21:25:43.847902    8005 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:43.847933    8005 start.go:729] Will try again in 5 seconds ...
	I0718 21:25:44.479668    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0718 21:25:44.479739    8005 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.0092405s
	I0718 21:25:44.479770    8005 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0718 21:25:45.233898    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0718 21:25:45.233954    8005 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.763435916s
	I0718 21:25:45.233987    8005 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0718 21:25:45.596703    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0718 21:25:45.596758    8005 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.12640325s
	I0718 21:25:45.596784    8005 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0718 21:25:45.761780    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0718 21:25:45.761845    8005 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.291217209s
	I0718 21:25:45.761881    8005 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0718 21:25:46.825419    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0718 21:25:46.825433    8005 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 5.355123917s
	I0718 21:25:46.825445    8005 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0718 21:25:48.848101    8005 start.go:360] acquireMachinesLock for no-preload-436000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:49.095789    8005 start.go:364] duration metric: took 247.604791ms to acquireMachinesLock for "no-preload-436000"
	I0718 21:25:49.095948    8005 start.go:93] Provisioning new machine with config: &{Name:no-preload-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:49.096202    8005 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:49.108829    8005 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:25:49.162090    8005 start.go:159] libmachine.API.Create for "no-preload-436000" (driver="qemu2")
	I0718 21:25:49.162223    8005 client.go:168] LocalClient.Create starting
	I0718 21:25:49.162385    8005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:49.162453    8005 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:49.162474    8005 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:49.162564    8005 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:49.162609    8005 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:49.162621    8005 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:49.163133    8005 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:49.315104    8005 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:49.428057    8005 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:49.428063    8005 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:49.428232    8005 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:49.437815    8005 main.go:141] libmachine: STDOUT: 
	I0718 21:25:49.437885    8005 main.go:141] libmachine: STDERR: 
	I0718 21:25:49.437947    8005 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2 +20000M
	I0718 21:25:49.446018    8005 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:49.446046    8005 main.go:141] libmachine: STDERR: 
	I0718 21:25:49.446057    8005 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:49.446062    8005 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:49.446074    8005 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:49.446124    8005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:18:17:45:2f:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:49.447825    8005 main.go:141] libmachine: STDOUT: 
	I0718 21:25:49.447889    8005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:49.447902    8005 client.go:171] duration metric: took 285.680416ms to LocalClient.Create
	I0718 21:25:51.119999    8005 cache.go:157] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0718 21:25:51.120075    8005 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 9.649777458s
	I0718 21:25:51.120101    8005 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0718 21:25:51.120145    8005 cache.go:87] Successfully saved all images to host disk.
	I0718 21:25:51.450087    8005 start.go:128] duration metric: took 2.353913125s to createHost
	I0718 21:25:51.450176    8005 start.go:83] releasing machines lock for "no-preload-436000", held for 2.354412458s
	W0718 21:25:51.450461    8005 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:51.458828    8005 out.go:177] 
	W0718 21:25:51.462808    8005 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:51.462831    8005 out.go:239] * 
	* 
	W0718 21:25:51.465285    8005 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:51.473778    8005 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (62.989625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.20s)
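(Every FirstStart failure recorded above and below exits with GUEST_PROVISION for the same reason: the qemu2 driver cannot reach the vmnet helper, logging 'Failed to connect to "/var/run/socket_vmnet": Connection refused'. A minimal sketch of how this could be checked on the build host, assuming the Homebrew-installed socket_vmnet service that the logged paths /opt/socket_vmnet/bin/socket_vmnet_client and /var/run/socket_vmnet point to:

	ls -l /var/run/socket_vmnet                  # the helper's UNIX socket should exist
	sudo launchctl list | grep -i socket_vmnet   # the launchd daemon should be loaded
	sudo brew services restart socket_vmnet      # restart the service if the socket is missing

These commands are illustrative only and are not part of the captured test output.)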

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-969000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (31.384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-969000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.586666ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-969000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (28.018084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-969000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (28.596458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-969000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-969000 --alsologtostderr -v=1: exit status 83 (44.859042ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-969000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:25:46.411924    8058 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:46.412312    8058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:46.412315    8058 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:46.412318    8058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:46.412466    8058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:46.412680    8058 out.go:298] Setting JSON to false
	I0718 21:25:46.412687    8058 mustload.go:65] Loading cluster: old-k8s-version-969000
	I0718 21:25:46.412879    8058 config.go:182] Loaded profile config "old-k8s-version-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0718 21:25:46.417565    8058 out.go:177] * The control-plane node old-k8s-version-969000 host is not running: state=Stopped
	I0718 21:25:46.425565    8058 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-969000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-969000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (28.495416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (28.63875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.8158795s)

                                                
                                                
-- stdout --
	* [embed-certs-489000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-489000" primary control-plane node in "embed-certs-489000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-489000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:25:46.723526    8075 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:46.723640    8075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:46.723643    8075 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:46.723646    8075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:46.723772    8075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:46.724813    8075 out.go:298] Setting JSON to false
	I0718 21:25:46.740795    8075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5114,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:46.740864    8075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:46.745411    8075 out.go:177] * [embed-certs-489000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:46.752676    8075 notify.go:220] Checking for updates...
	I0718 21:25:46.755584    8075 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:46.767533    8075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:46.774574    8075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:46.782584    8075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:46.789547    8075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:46.796615    8075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:46.800681    8075 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:25:46.800744    8075 config.go:182] Loaded profile config "no-preload-436000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0718 21:25:46.800793    8075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:46.804554    8075 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:25:46.811419    8075 start.go:297] selected driver: qemu2
	I0718 21:25:46.811424    8075 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:25:46.811429    8075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:46.813800    8075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:25:46.817590    8075 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:25:46.821664    8075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:46.821693    8075 cni.go:84] Creating CNI manager for ""
	I0718 21:25:46.821701    8075 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:25:46.821705    8075 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:25:46.821731    8075 start.go:340] cluster config:
	{Name:embed-certs-489000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:46.825476    8075 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:46.833412    8075 out.go:177] * Starting "embed-certs-489000" primary control-plane node in "embed-certs-489000" cluster
	I0718 21:25:46.837577    8075 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:25:46.837592    8075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:25:46.837602    8075 cache.go:56] Caching tarball of preloaded images
	I0718 21:25:46.837656    8075 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:25:46.837661    8075 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:25:46.837729    8075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/embed-certs-489000/config.json ...
	I0718 21:25:46.837742    8075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/embed-certs-489000/config.json: {Name:mk2398238fad07d6b993b3bb87729f621336d551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:25:46.837960    8075 start.go:360] acquireMachinesLock for embed-certs-489000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:46.837995    8075 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "embed-certs-489000"
	I0718 21:25:46.838006    8075 start.go:93] Provisioning new machine with config: &{Name:embed-certs-489000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:embed-certs-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:46.838040    8075 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:46.846554    8075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:25:46.865199    8075 start.go:159] libmachine.API.Create for "embed-certs-489000" (driver="qemu2")
	I0718 21:25:46.865232    8075 client.go:168] LocalClient.Create starting
	I0718 21:25:46.865305    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:46.865335    8075 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:46.865343    8075 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:46.865378    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:46.865402    8075 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:46.865420    8075 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:46.865777    8075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:46.994162    8075 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:47.073859    8075 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:47.073865    8075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:47.074019    8075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:25:47.083636    8075 main.go:141] libmachine: STDOUT: 
	I0718 21:25:47.083655    8075 main.go:141] libmachine: STDERR: 
	I0718 21:25:47.083715    8075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2 +20000M
	I0718 21:25:47.091723    8075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:47.091739    8075 main.go:141] libmachine: STDERR: 
	I0718 21:25:47.091763    8075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:25:47.091768    8075 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:47.091780    8075 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:47.091809    8075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:03:e8:87:0d:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:25:47.093460    8075 main.go:141] libmachine: STDOUT: 
	I0718 21:25:47.093479    8075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:47.093498    8075 client.go:171] duration metric: took 228.267125ms to LocalClient.Create
	I0718 21:25:49.095607    8075 start.go:128] duration metric: took 2.257605541s to createHost
	I0718 21:25:49.095682    8075 start.go:83] releasing machines lock for "embed-certs-489000", held for 2.257734458s
	W0718 21:25:49.095736    8075 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:49.118844    8075 out.go:177] * Deleting "embed-certs-489000" in qemu2 ...
	W0718 21:25:49.137135    8075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:49.137164    8075 start.go:729] Will try again in 5 seconds ...
	I0718 21:25:54.139281    8075 start.go:360] acquireMachinesLock for embed-certs-489000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:54.139759    8075 start.go:364] duration metric: took 377.25µs to acquireMachinesLock for "embed-certs-489000"
	I0718 21:25:54.139837    8075 start.go:93] Provisioning new machine with config: &{Name:embed-certs-489000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:embed-certs-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:25:54.140105    8075 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:25:54.145678    8075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:25:54.196861    8075 start.go:159] libmachine.API.Create for "embed-certs-489000" (driver="qemu2")
	I0718 21:25:54.196910    8075 client.go:168] LocalClient.Create starting
	I0718 21:25:54.197029    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:25:54.197081    8075 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:54.197099    8075 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:54.197165    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:25:54.197196    8075 main.go:141] libmachine: Decoding PEM data...
	I0718 21:25:54.197210    8075 main.go:141] libmachine: Parsing certificate...
	I0718 21:25:54.197771    8075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:25:54.339135    8075 main.go:141] libmachine: Creating SSH key...
	I0718 21:25:54.438606    8075 main.go:141] libmachine: Creating Disk image...
	I0718 21:25:54.438611    8075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:25:54.438789    8075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:25:54.448181    8075 main.go:141] libmachine: STDOUT: 
	I0718 21:25:54.448202    8075 main.go:141] libmachine: STDERR: 
	I0718 21:25:54.448252    8075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2 +20000M
	I0718 21:25:54.456129    8075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:25:54.456149    8075 main.go:141] libmachine: STDERR: 
	I0718 21:25:54.456164    8075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:25:54.456169    8075 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:25:54.456177    8075 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:54.456207    8075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:24:06:1d:92:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:25:54.457929    8075 main.go:141] libmachine: STDOUT: 
	I0718 21:25:54.457950    8075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:54.457969    8075 client.go:171] duration metric: took 261.059083ms to LocalClient.Create
	I0718 21:25:56.460113    8075 start.go:128] duration metric: took 2.320032916s to createHost
	I0718 21:25:56.460278    8075 start.go:83] releasing machines lock for "embed-certs-489000", held for 2.32048125s
	W0718 21:25:56.460564    8075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:56.472996    8075 out.go:177] 
	W0718 21:25:56.479168    8075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:56.479193    8075 out.go:239] * 
	* 
	W0718 21:25:56.481696    8075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:25:56.494081    8075 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (63.708625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-436000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-436000 create -f testdata/busybox.yaml: exit status 1 (30.099084ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-436000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-436000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (28.391583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (28.489458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-436000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-436000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-436000 describe deploy/metrics-server -n kube-system: exit status 1 (26.62575ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-436000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-436000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (27.944459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (6.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (6.10822975s)

                                                
                                                
-- stdout --
	* [no-preload-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-436000" primary control-plane node in "no-preload-436000" cluster
	* Restarting existing qemu2 VM for "no-preload-436000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-436000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:25:55.472523    8127 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:25:55.472638    8127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:55.472642    8127 out.go:304] Setting ErrFile to fd 2...
	I0718 21:25:55.472645    8127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:25:55.472767    8127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:25:55.473718    8127 out.go:298] Setting JSON to false
	I0718 21:25:55.489699    8127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5123,"bootTime":1721358032,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:25:55.489772    8127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:25:55.494595    8127 out.go:177] * [no-preload-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:25:55.502579    8127 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:25:55.502635    8127 notify.go:220] Checking for updates...
	I0718 21:25:55.508579    8127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:25:55.511579    8127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:25:55.512988    8127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:25:55.515510    8127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:25:55.518565    8127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:25:55.521923    8127 config.go:182] Loaded profile config "no-preload-436000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0718 21:25:55.522174    8127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:25:55.526454    8127 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:25:55.533589    8127 start.go:297] selected driver: qemu2
	I0718 21:25:55.533600    8127 start.go:901] validating driver "qemu2" against &{Name:no-preload-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:55.533675    8127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:25:55.535881    8127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:25:55.535929    8127 cni.go:84] Creating CNI manager for ""
	I0718 21:25:55.535937    8127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:25:55.535962    8127 start.go:340] cluster config:
	{Name:no-preload-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-436000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:25:55.539422    8127 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.546529    8127 out.go:177] * Starting "no-preload-436000" primary control-plane node in "no-preload-436000" cluster
	I0718 21:25:55.550470    8127 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 21:25:55.550545    8127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/no-preload-436000/config.json ...
	I0718 21:25:55.550576    8127 cache.go:107] acquiring lock: {Name:mk67b75e898accbb5972f8d51a2c9110da933059 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550594    8127 cache.go:107] acquiring lock: {Name:mk57b13e2f5dbbd122340920059cb05e82ebc2df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550607    8127 cache.go:107] acquiring lock: {Name:mkb1cce8d54c03d778460e2aa3504bd926f7ea18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550592    8127 cache.go:107] acquiring lock: {Name:mk538a76863935988285d11f5e65da707adf42e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550662    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0718 21:25:55.550671    8127 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 97µs
	I0718 21:25:55.550678    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0718 21:25:55.550696    8127 cache.go:107] acquiring lock: {Name:mk9a573cfb1af555dc58451597ba91b25490ae9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550699    8127 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 117.833µs
	I0718 21:25:55.550693    8127 cache.go:107] acquiring lock: {Name:mk4abfe95e918e3f025f575f8f2a4f61553c68e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550705    8127 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0718 21:25:55.550689    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0718 21:25:55.550726    8127 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 119.458µs
	I0718 21:25:55.550732    8127 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0718 21:25:55.550682    8127 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0718 21:25:55.550740    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0718 21:25:55.550745    8127 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 50.291µs
	I0718 21:25:55.550752    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0718 21:25:55.550754    8127 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0718 21:25:55.550757    8127 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 64.667µs
	I0718 21:25:55.550760    8127 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0718 21:25:55.550774    8127 cache.go:107] acquiring lock: {Name:mk30e3561d422fad73565322ebcce9e8ba0d5ab1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550808    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0718 21:25:55.550816    8127 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 225.583µs
	I0718 21:25:55.550813    8127 cache.go:107] acquiring lock: {Name:mk83dc5c223dc577731cb4743008707595383e6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:25:55.550822    8127 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0718 21:25:55.550832    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0718 21:25:55.550836    8127 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 92.541µs
	I0718 21:25:55.550841    8127 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0718 21:25:55.550864    8127 cache.go:115] /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0718 21:25:55.550868    8127 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 129.25µs
	I0718 21:25:55.550875    8127 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0718 21:25:55.550880    8127 cache.go:87] Successfully saved all images to host disk.
	I0718 21:25:55.550981    8127 start.go:360] acquireMachinesLock for no-preload-436000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:25:56.460464    8127 start.go:364] duration metric: took 909.448458ms to acquireMachinesLock for "no-preload-436000"
	I0718 21:25:56.460658    8127 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:25:56.460696    8127 fix.go:54] fixHost starting: 
	I0718 21:25:56.461340    8127 fix.go:112] recreateIfNeeded on no-preload-436000: state=Stopped err=<nil>
	W0718 21:25:56.461381    8127 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:25:56.476073    8127 out.go:177] * Restarting existing qemu2 VM for "no-preload-436000" ...
	I0718 21:25:56.482122    8127 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:25:56.482330    8127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:18:17:45:2f:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:25:56.491852    8127 main.go:141] libmachine: STDOUT: 
	I0718 21:25:56.491939    8127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:25:56.492072    8127 fix.go:56] duration metric: took 31.374792ms for fixHost
	I0718 21:25:56.492098    8127 start.go:83] releasing machines lock for "no-preload-436000", held for 31.565041ms
	W0718 21:25:56.492130    8127 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:25:56.492288    8127 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:25:56.492303    8127 start.go:729] Will try again in 5 seconds ...
	I0718 21:26:01.494362    8127 start.go:360] acquireMachinesLock for no-preload-436000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:01.494748    8127 start.go:364] duration metric: took 298.459µs to acquireMachinesLock for "no-preload-436000"
	I0718 21:26:01.494885    8127 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:01.494907    8127 fix.go:54] fixHost starting: 
	I0718 21:26:01.495671    8127 fix.go:112] recreateIfNeeded on no-preload-436000: state=Stopped err=<nil>
	W0718 21:26:01.495696    8127 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:01.501280    8127 out.go:177] * Restarting existing qemu2 VM for "no-preload-436000" ...
	I0718 21:26:01.508191    8127 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:01.508374    8127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:18:17:45:2f:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/no-preload-436000/disk.qcow2
	I0718 21:26:01.517586    8127 main.go:141] libmachine: STDOUT: 
	I0718 21:26:01.517664    8127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:01.517743    8127 fix.go:56] duration metric: took 22.835916ms for fixHost
	I0718 21:26:01.517765    8127 start.go:83] releasing machines lock for "no-preload-436000", held for 22.98025ms
	W0718 21:26:01.517961    8127 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-436000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-436000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:01.524119    8127 out.go:177] 
	W0718 21:26:01.528300    8127 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:01.528323    8127 out.go:239] * 
	* 
	W0718 21:26:01.531303    8127 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:26:01.539200    8127 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (64.752541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.18s)
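Note: every SecondStart failure in this group bottoms out in the same qemu2 driver error, "Failed to connect to "/var/run/socket_vmnet": Connection refused", so the restarted VM never gets its network. The following is an illustrative sketch only (it is not part of the minikube test suite; the socket path is taken verbatim from the errors above) showing how one could probe whether anything is listening on that endpoint:

    // socketprobe.go - illustrative sketch only; not part of the minikube test suite.
    // It dials the unix socket that the qemu2 driver reported as unreachable.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken verbatim from the errors above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A "connection refused" here matches the failures in this report:
            // nothing is accepting connections on this path.
            fmt.Printf("connect to %s failed: %v\n", sock, err)
            return
        }
        defer conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

If such a probe fails the same way, every profile restart in this group will keep hitting the identical error regardless of the Kubernetes version under test.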

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-489000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-489000 create -f testdata/busybox.yaml: exit status 1 (29.600583ms)

** stderr ** 
	error: context "embed-certs-489000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-489000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (28.498667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (28.823709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-489000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-489000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-489000 describe deploy/metrics-server -n kube-system: exit status 1 (27.104417ms)

** stderr ** 
	error: context "embed-certs-489000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-489000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (27.789834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.190817708s)

-- stdout --
	* [embed-certs-489000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-489000" primary control-plane node in "embed-certs-489000" cluster
	* Restarting existing qemu2 VM for "embed-certs-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:26:00.164180    8169 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:00.164535    8169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:00.164544    8169 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:00.164548    8169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:00.164733    8169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:00.166022    8169 out.go:298] Setting JSON to false
	I0718 21:26:00.182196    8169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5128,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:26:00.182261    8169 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:26:00.186119    8169 out.go:177] * [embed-certs-489000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:26:00.193080    8169 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:26:00.193125    8169 notify.go:220] Checking for updates...
	I0718 21:26:00.200049    8169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:26:00.203107    8169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:26:00.206136    8169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:26:00.209091    8169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:26:00.212111    8169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:26:00.215271    8169 config.go:182] Loaded profile config "embed-certs-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:00.215550    8169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:26:00.220040    8169 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:26:00.226080    8169 start.go:297] selected driver: qemu2
	I0718 21:26:00.226088    8169 start.go:901] validating driver "qemu2" against &{Name:embed-certs-489000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:embed-certs-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:00.226161    8169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:26:00.228352    8169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:26:00.228416    8169 cni.go:84] Creating CNI manager for ""
	I0718 21:26:00.228423    8169 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:26:00.228452    8169 start.go:340] cluster config:
	{Name:embed-certs-489000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-489000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:00.231967    8169 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:26:00.240066    8169 out.go:177] * Starting "embed-certs-489000" primary control-plane node in "embed-certs-489000" cluster
	I0718 21:26:00.243998    8169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:26:00.244020    8169 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:26:00.244033    8169 cache.go:56] Caching tarball of preloaded images
	I0718 21:26:00.244105    8169 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:26:00.244111    8169 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:26:00.244168    8169 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/embed-certs-489000/config.json ...
	I0718 21:26:00.244580    8169 start.go:360] acquireMachinesLock for embed-certs-489000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:00.244609    8169 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "embed-certs-489000"
	I0718 21:26:00.244617    8169 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:00.244622    8169 fix.go:54] fixHost starting: 
	I0718 21:26:00.244754    8169 fix.go:112] recreateIfNeeded on embed-certs-489000: state=Stopped err=<nil>
	W0718 21:26:00.244763    8169 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:00.251978    8169 out.go:177] * Restarting existing qemu2 VM for "embed-certs-489000" ...
	I0718 21:26:00.256005    8169 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:00.256044    8169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:24:06:1d:92:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:26:00.258123    8169 main.go:141] libmachine: STDOUT: 
	I0718 21:26:00.258143    8169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:00.258179    8169 fix.go:56] duration metric: took 13.557125ms for fixHost
	I0718 21:26:00.258183    8169 start.go:83] releasing machines lock for "embed-certs-489000", held for 13.570542ms
	W0718 21:26:00.258190    8169 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:00.258221    8169 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:00.258226    8169 start.go:729] Will try again in 5 seconds ...
	I0718 21:26:05.260351    8169 start.go:360] acquireMachinesLock for embed-certs-489000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:05.260791    8169 start.go:364] duration metric: took 326.916µs to acquireMachinesLock for "embed-certs-489000"
	I0718 21:26:05.260914    8169 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:05.260933    8169 fix.go:54] fixHost starting: 
	I0718 21:26:05.261723    8169 fix.go:112] recreateIfNeeded on embed-certs-489000: state=Stopped err=<nil>
	W0718 21:26:05.261749    8169 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:05.276922    8169 out.go:177] * Restarting existing qemu2 VM for "embed-certs-489000" ...
	I0718 21:26:05.280725    8169 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:05.281033    8169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:24:06:1d:92:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/embed-certs-489000/disk.qcow2
	I0718 21:26:05.290498    8169 main.go:141] libmachine: STDOUT: 
	I0718 21:26:05.290567    8169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:05.290635    8169 fix.go:56] duration metric: took 29.700083ms for fixHost
	I0718 21:26:05.290648    8169 start.go:83] releasing machines lock for "embed-certs-489000", held for 29.837792ms
	W0718 21:26:05.290908    8169 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:05.299672    8169 out.go:177] 
	W0718 21:26:05.302860    8169 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:05.302911    8169 out.go:239] * 
	* 
	W0718 21:26:05.305999    8169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:26:05.314614    8169 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-489000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (64.352417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-436000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (31.888708ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-436000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.761083ms)

** stderr ** 
	error: context "no-preload-436000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (28.211292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-436000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (29.403208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-436000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-436000 --alsologtostderr -v=1: exit status 83 (39.066291ms)

-- stdout --
	* The control-plane node no-preload-436000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-436000"

-- /stdout --
** stderr ** 
	I0718 21:26:01.802778    8188 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:01.802931    8188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:01.802934    8188 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:01.802937    8188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:01.803071    8188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:01.803306    8188 out.go:298] Setting JSON to false
	I0718 21:26:01.803313    8188 mustload.go:65] Loading cluster: no-preload-436000
	I0718 21:26:01.803539    8188 config.go:182] Loaded profile config "no-preload-436000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0718 21:26:01.807146    8188 out.go:177] * The control-plane node no-preload-436000 host is not running: state=Stopped
	I0718 21:26:01.810191    8188 out.go:177]   To start a cluster, run: "minikube start -p no-preload-436000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-436000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (28.37575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (28.050833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-436000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.864455209s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-167000" primary control-plane node in "default-k8s-diff-port-167000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-167000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:26:02.214055    8212 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:02.214219    8212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:02.214222    8212 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:02.214224    8212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:02.214370    8212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:02.215501    8212 out.go:298] Setting JSON to false
	I0718 21:26:02.231604    8212 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5130,"bootTime":1721358032,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:26:02.231675    8212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:26:02.235344    8212 out.go:177] * [default-k8s-diff-port-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:26:02.241146    8212 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:26:02.241179    8212 notify.go:220] Checking for updates...
	I0718 21:26:02.248192    8212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:26:02.251265    8212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:26:02.254240    8212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:26:02.255593    8212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:26:02.258201    8212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:26:02.261530    8212 config.go:182] Loaded profile config "embed-certs-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:02.261584    8212 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:02.261634    8212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:26:02.265987    8212 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:26:02.273217    8212 start.go:297] selected driver: qemu2
	I0718 21:26:02.273223    8212 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:26:02.273229    8212 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:26:02.275566    8212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:26:02.278187    8212 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:26:02.281309    8212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:26:02.281349    8212 cni.go:84] Creating CNI manager for ""
	I0718 21:26:02.281357    8212 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:26:02.281360    8212 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:26:02.281390    8212 start.go:340] cluster config:
	{Name:default-k8s-diff-port-167000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:02.284961    8212 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:26:02.291179    8212 out.go:177] * Starting "default-k8s-diff-port-167000" primary control-plane node in "default-k8s-diff-port-167000" cluster
	I0718 21:26:02.299275    8212 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:26:02.299290    8212 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:26:02.299300    8212 cache.go:56] Caching tarball of preloaded images
	I0718 21:26:02.299364    8212 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:26:02.299370    8212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:26:02.299428    8212 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/default-k8s-diff-port-167000/config.json ...
	I0718 21:26:02.299440    8212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/default-k8s-diff-port-167000/config.json: {Name:mk634abe46a5f4da262c34ad93544dd20478bcea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:26:02.299655    8212 start.go:360] acquireMachinesLock for default-k8s-diff-port-167000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:02.299688    8212 start.go:364] duration metric: took 26.75µs to acquireMachinesLock for "default-k8s-diff-port-167000"
	I0718 21:26:02.299700    8212 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-167000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:26:02.299739    8212 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:26:02.307174    8212 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:26:02.325129    8212 start.go:159] libmachine.API.Create for "default-k8s-diff-port-167000" (driver="qemu2")
	I0718 21:26:02.325154    8212 client.go:168] LocalClient.Create starting
	I0718 21:26:02.325221    8212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:26:02.325253    8212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:02.325261    8212 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:02.325302    8212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:26:02.325325    8212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:02.325335    8212 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:02.325683    8212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:26:02.467444    8212 main.go:141] libmachine: Creating SSH key...
	I0718 21:26:02.571088    8212 main.go:141] libmachine: Creating Disk image...
	I0718 21:26:02.571098    8212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:26:02.571273    8212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:02.580414    8212 main.go:141] libmachine: STDOUT: 
	I0718 21:26:02.580432    8212 main.go:141] libmachine: STDERR: 
	I0718 21:26:02.580485    8212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2 +20000M
	I0718 21:26:02.588496    8212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:26:02.588518    8212 main.go:141] libmachine: STDERR: 
	I0718 21:26:02.588530    8212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:02.588537    8212 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:26:02.588543    8212 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:02.588569    8212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:fe:98:ac:09:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:02.590236    8212 main.go:141] libmachine: STDOUT: 
	I0718 21:26:02.590250    8212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:02.590273    8212 client.go:171] duration metric: took 265.122167ms to LocalClient.Create
	I0718 21:26:04.592394    8212 start.go:128] duration metric: took 2.292693708s to createHost
	I0718 21:26:04.592457    8212 start.go:83] releasing machines lock for "default-k8s-diff-port-167000", held for 2.292819167s
	W0718 21:26:04.592521    8212 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:04.606841    8212 out.go:177] * Deleting "default-k8s-diff-port-167000" in qemu2 ...
	W0718 21:26:04.631703    8212 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:04.631731    8212 start.go:729] Will try again in 5 seconds ...
	I0718 21:26:09.632385    8212 start.go:360] acquireMachinesLock for default-k8s-diff-port-167000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:09.632962    8212 start.go:364] duration metric: took 435.25µs to acquireMachinesLock for "default-k8s-diff-port-167000"
	I0718 21:26:09.633125    8212 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-167000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:26:09.633415    8212 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:26:09.642757    8212 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:26:09.695487    8212 start.go:159] libmachine.API.Create for "default-k8s-diff-port-167000" (driver="qemu2")
	I0718 21:26:09.695546    8212 client.go:168] LocalClient.Create starting
	I0718 21:26:09.695676    8212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:26:09.695741    8212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:09.695762    8212 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:09.695836    8212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:26:09.695882    8212 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:09.695893    8212 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:09.696509    8212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:26:09.840308    8212 main.go:141] libmachine: Creating SSH key...
	I0718 21:26:09.990019    8212 main.go:141] libmachine: Creating Disk image...
	I0718 21:26:09.990026    8212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:26:09.990202    8212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:09.999914    8212 main.go:141] libmachine: STDOUT: 
	I0718 21:26:09.999935    8212 main.go:141] libmachine: STDERR: 
	I0718 21:26:09.999994    8212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2 +20000M
	I0718 21:26:10.007914    8212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:26:10.007936    8212 main.go:141] libmachine: STDERR: 
	I0718 21:26:10.007948    8212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:10.007957    8212 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:26:10.007970    8212 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:10.007992    8212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a3:bb:ab:2f:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:10.009625    8212 main.go:141] libmachine: STDOUT: 
	I0718 21:26:10.009640    8212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:10.009652    8212 client.go:171] duration metric: took 314.109875ms to LocalClient.Create
	I0718 21:26:12.011776    8212 start.go:128] duration metric: took 2.378381208s to createHost
	I0718 21:26:12.011846    8212 start.go:83] releasing machines lock for "default-k8s-diff-port-167000", held for 2.378922625s
	W0718 21:26:12.012136    8212 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-167000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-167000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:12.021867    8212 out.go:177] 
	W0718 21:26:12.024833    8212 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:12.024909    8212 out.go:239] * 
	* 
	W0718 21:26:12.027477    8212 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:26:12.035790    8212 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (64.298375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.93s)
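Every FirstStart failure in this group shows the same root cause in the log: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon behind /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the build host, not part of the recorded run (paths taken from the log; the launchd check assumes socket_vmnet was installed as a service):

    ls -l /var/run/socket_vmnet                  # the socket the qemu2 driver dials
    pgrep -fl socket_vmnet                       # is the daemon (or any client) running?
    sudo launchctl list | grep socket_vmnet      # only if it was installed as a launchd service

If none of these show a running daemon, restarting socket_vmnet per its install documentation should clear the "Connection refused" errors seen throughout this report.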

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-489000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (31.491084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-489000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.709334ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-489000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (29.285791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-489000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (29.517959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
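The want/got diff above lists every expected v1.30.3 image as missing because the image check ran against a profile whose VM was never successfully started, so the returned list was empty in this run (every -want entry, no +got entries). Reproduction sketch using the command copied from the log:

    out/minikube-darwin-arm64 -p embed-certs-489000 image list --format=json    # empty output while the host is stopped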

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-489000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-489000 --alsologtostderr -v=1: exit status 83 (40.008458ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-489000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-489000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:26:05.576783    8236 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:05.576937    8236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:05.576941    8236 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:05.576943    8236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:05.577081    8236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:05.577341    8236 out.go:298] Setting JSON to false
	I0718 21:26:05.577348    8236 mustload.go:65] Loading cluster: embed-certs-489000
	I0718 21:26:05.577552    8236 config.go:182] Loaded profile config "embed-certs-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:05.581621    8236 out.go:177] * The control-plane node embed-certs-489000 host is not running: state=Stopped
	I0718 21:26:05.585616    8236 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-489000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-489000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (28.995958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (28.265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-701000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-701000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.773073083s)

                                                
                                                
-- stdout --
	* [newest-cni-701000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-701000" primary control-plane node in "newest-cni-701000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-701000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:26:05.881187    8253 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:05.881316    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:05.881319    8253 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:05.881322    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:05.881448    8253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:05.882522    8253 out.go:298] Setting JSON to false
	I0718 21:26:05.898657    8253 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5133,"bootTime":1721358032,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:26:05.898725    8253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:26:05.903637    8253 out.go:177] * [newest-cni-701000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:26:05.910547    8253 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:26:05.910627    8253 notify.go:220] Checking for updates...
	I0718 21:26:05.917610    8253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:26:05.920566    8253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:26:05.923640    8253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:26:05.926797    8253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:26:05.929615    8253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:26:05.932886    8253 config.go:182] Loaded profile config "default-k8s-diff-port-167000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:05.932950    8253 config.go:182] Loaded profile config "multinode-024000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:05.933002    8253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:26:05.937589    8253 out.go:177] * Using the qemu2 driver based on user configuration
	I0718 21:26:05.944554    8253 start.go:297] selected driver: qemu2
	I0718 21:26:05.944560    8253 start.go:901] validating driver "qemu2" against <nil>
	I0718 21:26:05.944566    8253 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:26:05.946986    8253 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0718 21:26:05.947008    8253 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0718 21:26:05.954613    8253 out.go:177] * Automatically selected the socket_vmnet network
	I0718 21:26:05.957733    8253 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0718 21:26:05.957774    8253 cni.go:84] Creating CNI manager for ""
	I0718 21:26:05.957782    8253 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:26:05.957786    8253 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:26:05.957818    8253 start.go:340] cluster config:
	{Name:newest-cni-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:05.961633    8253 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:26:05.969586    8253 out.go:177] * Starting "newest-cni-701000" primary control-plane node in "newest-cni-701000" cluster
	I0718 21:26:05.973612    8253 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 21:26:05.973627    8253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0718 21:26:05.973639    8253 cache.go:56] Caching tarball of preloaded images
	I0718 21:26:05.973718    8253 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:26:05.973724    8253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0718 21:26:05.973825    8253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/newest-cni-701000/config.json ...
	I0718 21:26:05.973840    8253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/newest-cni-701000/config.json: {Name:mk87ce1e36e2ff7de0eaa45ac844a2e226cf0891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:26:05.974254    8253 start.go:360] acquireMachinesLock for newest-cni-701000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:05.974296    8253 start.go:364] duration metric: took 33.584µs to acquireMachinesLock for "newest-cni-701000"
	I0718 21:26:05.974309    8253 start.go:93] Provisioning new machine with config: &{Name:newest-cni-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:26:05.974345    8253 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:26:05.981632    8253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:26:06.000762    8253 start.go:159] libmachine.API.Create for "newest-cni-701000" (driver="qemu2")
	I0718 21:26:06.000792    8253 client.go:168] LocalClient.Create starting
	I0718 21:26:06.000853    8253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:26:06.000890    8253 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:06.000901    8253 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:06.000939    8253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:26:06.000971    8253 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:06.000978    8253 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:06.001378    8253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:26:06.132682    8253 main.go:141] libmachine: Creating SSH key...
	I0718 21:26:06.233303    8253 main.go:141] libmachine: Creating Disk image...
	I0718 21:26:06.233313    8253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:26:06.233481    8253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:06.242855    8253 main.go:141] libmachine: STDOUT: 
	I0718 21:26:06.242873    8253 main.go:141] libmachine: STDERR: 
	I0718 21:26:06.242935    8253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2 +20000M
	I0718 21:26:06.250819    8253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:26:06.250838    8253 main.go:141] libmachine: STDERR: 
	I0718 21:26:06.250852    8253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:06.250861    8253 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:26:06.250875    8253 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:06.250903    8253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:33:15:6d:85:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:06.252528    8253 main.go:141] libmachine: STDOUT: 
	I0718 21:26:06.252543    8253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:06.252560    8253 client.go:171] duration metric: took 251.771666ms to LocalClient.Create
	I0718 21:26:08.254673    8253 start.go:128] duration metric: took 2.280368042s to createHost
	I0718 21:26:08.254729    8253 start.go:83] releasing machines lock for "newest-cni-701000", held for 2.280482833s
	W0718 21:26:08.254800    8253 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:08.262803    8253 out.go:177] * Deleting "newest-cni-701000" in qemu2 ...
	W0718 21:26:08.287095    8253 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:08.287126    8253 start.go:729] Will try again in 5 seconds ...
	I0718 21:26:13.289150    8253 start.go:360] acquireMachinesLock for newest-cni-701000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:13.289590    8253 start.go:364] duration metric: took 358.042µs to acquireMachinesLock for "newest-cni-701000"
	I0718 21:26:13.289754    8253 start.go:93] Provisioning new machine with config: &{Name:newest-cni-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:26:13.290027    8253 start.go:125] createHost starting for "" (driver="qemu2")
	I0718 21:26:13.299645    8253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0718 21:26:13.352770    8253 start.go:159] libmachine.API.Create for "newest-cni-701000" (driver="qemu2")
	I0718 21:26:13.352821    8253 client.go:168] LocalClient.Create starting
	I0718 21:26:13.352931    8253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/ca.pem
	I0718 21:26:13.352988    8253 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:13.353004    8253 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:13.353082    8253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1213/.minikube/certs/cert.pem
	I0718 21:26:13.353112    8253 main.go:141] libmachine: Decoding PEM data...
	I0718 21:26:13.353149    8253 main.go:141] libmachine: Parsing certificate...
	I0718 21:26:13.353737    8253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0718 21:26:13.511128    8253 main.go:141] libmachine: Creating SSH key...
	I0718 21:26:13.556584    8253 main.go:141] libmachine: Creating Disk image...
	I0718 21:26:13.556589    8253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0718 21:26:13.556770    8253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:13.566121    8253 main.go:141] libmachine: STDOUT: 
	I0718 21:26:13.566141    8253 main.go:141] libmachine: STDERR: 
	I0718 21:26:13.566187    8253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2 +20000M
	I0718 21:26:13.574053    8253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0718 21:26:13.574072    8253 main.go:141] libmachine: STDERR: 
	I0718 21:26:13.574083    8253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:13.574088    8253 main.go:141] libmachine: Starting QEMU VM...
	I0718 21:26:13.574095    8253 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:13.574131    8253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ca:22:c6:9f:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:13.575754    8253 main.go:141] libmachine: STDOUT: 
	I0718 21:26:13.575789    8253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:13.575802    8253 client.go:171] duration metric: took 222.980042ms to LocalClient.Create
	I0718 21:26:15.577948    8253 start.go:128] duration metric: took 2.287948292s to createHost
	I0718 21:26:15.578005    8253 start.go:83] releasing machines lock for "newest-cni-701000", held for 2.288452583s
	W0718 21:26:15.578303    8253 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:15.592795    8253 out.go:177] 
	W0718 21:26:15.599940    8253 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:15.599990    8253 out.go:239] * 
	* 
	W0718 21:26:15.602238    8253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:26:15.609763    8253 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-701000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000: exit status 7 (67.495709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.84s)
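Every failed start in this group stops at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu2 VM is never launched. A minimal shell sketch for checking the daemon on the agent, reusing the client and socket paths from the log above (the trailing `true` is only a hypothetical stand-in for the qemu-system-aarch64 command that socket_vmnet_client normally wraps):

	# Is anything listening on the socket the qemu2 driver expects?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Reproduce the refusal outside of minikube; socket_vmnet_client takes the
	# socket path followed by the command to run, as in the invocation logged above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true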

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-167000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167000 create -f testdata/busybox.yaml: exit status 1 (30.398833ms)

** stderr ** 
	error: context "default-k8s-diff-port-167000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-167000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.334416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.591458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
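The DeployApp failure is downstream of the start failure: the cluster never came up, so the kubeconfig has no "default-k8s-diff-port-167000" context for kubectl to resolve. A quick check of which contexts actually exist on the agent (a sketch; the KUBECONFIG path is the one shown in the start output above):

	# List the contexts kubectl knows about under the run's kubeconfig
	KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig kubectl config get-contexts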

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-167000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-167000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167000 describe deploy/metrics-server -n kube-system: exit status 1 (26.624792ms)

** stderr ** 
	error: context "default-k8s-diff-port-167000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-167000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.500333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
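With no running apiserver there is no metrics-server deployment to describe, so the image-override check has nothing to inspect. Against a healthy profile the same verification amounts to the sketch below (the enable command is copied from the test; the jsonpath read-back is an illustrative addition, expecting the container image to contain fake.domain/registry.k8s.io/echoserver:1.4):

	# Re-apply the addon with the overridden image and registry, then read the image back
	out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-167000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context default-k8s-diff-port-167000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'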

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.179230167s)

-- stdout --
	* [default-k8s-diff-port-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-167000" primary control-plane node in "default-k8s-diff-port-167000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-167000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-167000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:26:14.528495    8302 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:14.528623    8302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:14.528627    8302 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:14.528629    8302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:14.528761    8302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:14.529816    8302 out.go:298] Setting JSON to false
	I0718 21:26:14.545792    8302 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5142,"bootTime":1721358032,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:26:14.545861    8302 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:26:14.550487    8302 out.go:177] * [default-k8s-diff-port-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:26:14.557513    8302 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:26:14.557577    8302 notify.go:220] Checking for updates...
	I0718 21:26:14.564508    8302 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:26:14.567443    8302 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:26:14.570502    8302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:26:14.573521    8302 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:26:14.576513    8302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:26:14.579796    8302 config.go:182] Loaded profile config "default-k8s-diff-port-167000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:14.580049    8302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:26:14.584424    8302 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:26:14.591466    8302 start.go:297] selected driver: qemu2
	I0718 21:26:14.591472    8302 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-167000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-167000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:14.591519    8302 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:26:14.593775    8302 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:26:14.593841    8302 cni.go:84] Creating CNI manager for ""
	I0718 21:26:14.593851    8302 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:26:14.593876    8302 start.go:340] cluster config:
	{Name:default-k8s-diff-port-167000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-167000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:14.597338    8302 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:26:14.604405    8302 out.go:177] * Starting "default-k8s-diff-port-167000" primary control-plane node in "default-k8s-diff-port-167000" cluster
	I0718 21:26:14.608522    8302 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:26:14.608540    8302 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 21:26:14.608558    8302 cache.go:56] Caching tarball of preloaded images
	I0718 21:26:14.608628    8302 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:26:14.608634    8302 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:26:14.608700    8302 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/default-k8s-diff-port-167000/config.json ...
	I0718 21:26:14.609123    8302 start.go:360] acquireMachinesLock for default-k8s-diff-port-167000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:15.578176    8302 start.go:364] duration metric: took 968.986709ms to acquireMachinesLock for "default-k8s-diff-port-167000"
	I0718 21:26:15.578343    8302 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:15.578378    8302 fix.go:54] fixHost starting: 
	I0718 21:26:15.579050    8302 fix.go:112] recreateIfNeeded on default-k8s-diff-port-167000: state=Stopped err=<nil>
	W0718 21:26:15.579091    8302 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:15.596874    8302 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-167000" ...
	I0718 21:26:15.602868    8302 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:15.603068    8302 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a3:bb:ab:2f:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:15.613527    8302 main.go:141] libmachine: STDOUT: 
	I0718 21:26:15.613611    8302 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:15.613743    8302 fix.go:56] duration metric: took 35.356959ms for fixHost
	I0718 21:26:15.613762    8302 start.go:83] releasing machines lock for "default-k8s-diff-port-167000", held for 35.55375ms
	W0718 21:26:15.613806    8302 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:15.613986    8302 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:15.614011    8302 start.go:729] Will try again in 5 seconds ...
	I0718 21:26:20.616091    8302 start.go:360] acquireMachinesLock for default-k8s-diff-port-167000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:20.616578    8302 start.go:364] duration metric: took 285.375µs to acquireMachinesLock for "default-k8s-diff-port-167000"
	I0718 21:26:20.616706    8302 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:20.616726    8302 fix.go:54] fixHost starting: 
	I0718 21:26:20.617471    8302 fix.go:112] recreateIfNeeded on default-k8s-diff-port-167000: state=Stopped err=<nil>
	W0718 21:26:20.617498    8302 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:20.627096    8302 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-167000" ...
	I0718 21:26:20.631118    8302 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:20.631379    8302 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a3:bb:ab:2f:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/default-k8s-diff-port-167000/disk.qcow2
	I0718 21:26:20.640563    8302 main.go:141] libmachine: STDOUT: 
	I0718 21:26:20.640609    8302 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:20.640678    8302 fix.go:56] duration metric: took 23.956542ms for fixHost
	I0718 21:26:20.640694    8302 start.go:83] releasing machines lock for "default-k8s-diff-port-167000", held for 24.091834ms
	W0718 21:26:20.640830    8302 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-167000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-167000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:20.648092    8302 out.go:177] 
	W0718 21:26:20.655876    8302 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:20.655914    8302 out.go:239] * 
	* 
	W0718 21:26:20.658570    8302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:26:20.666130    8302 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (65.527792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.25s)
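SecondStart hits the same socket_vmnet refusal on the restart path (fixHost), retries once after 5 seconds, and then exits with GUEST_PROVISION. If the daemon were restored on the host, the recovery suggested by minikube's own error message would look like the following sketch (flags copied from the test invocation above; assumes there is no VM state worth preserving):

	# Follow the error message's advice: drop the half-restarted profile, then start fresh
	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-167000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-167000 --memory=2200 --wait=true --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.30.3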

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-701000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-701000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180542334s)

-- stdout --
	* [newest-cni-701000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-701000" primary control-plane node in "newest-cni-701000" cluster
	* Restarting existing qemu2 VM for "newest-cni-701000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-701000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0718 21:26:19.430853    8335 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:19.430985    8335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:19.430988    8335 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:19.430990    8335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:19.431122    8335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:19.432134    8335 out.go:298] Setting JSON to false
	I0718 21:26:19.447969    8335 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5147,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 21:26:19.448034    8335 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:26:19.452780    8335 out.go:177] * [newest-cni-701000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 21:26:19.459915    8335 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:26:19.459975    8335 notify.go:220] Checking for updates...
	I0718 21:26:19.465890    8335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 21:26:19.468935    8335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 21:26:19.470336    8335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:26:19.472888    8335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 21:26:19.475943    8335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:26:19.479284    8335 config.go:182] Loaded profile config "newest-cni-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0718 21:26:19.479556    8335 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:26:19.483802    8335 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 21:26:19.490965    8335 start.go:297] selected driver: qemu2
	I0718 21:26:19.490971    8335 start.go:901] validating driver "qemu2" against &{Name:newest-cni-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:19.491018    8335 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:26:19.493394    8335 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0718 21:26:19.493435    8335 cni.go:84] Creating CNI manager for ""
	I0718 21:26:19.493443    8335 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:26:19.493489    8335 start.go:340] cluster config:
	{Name:newest-cni-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-701000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:26:19.496983    8335 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:26:19.504856    8335 out.go:177] * Starting "newest-cni-701000" primary control-plane node in "newest-cni-701000" cluster
	I0718 21:26:19.508940    8335 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 21:26:19.508955    8335 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0718 21:26:19.508970    8335 cache.go:56] Caching tarball of preloaded images
	I0718 21:26:19.509028    8335 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0718 21:26:19.509033    8335 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0718 21:26:19.509092    8335 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/newest-cni-701000/config.json ...
	I0718 21:26:19.509474    8335 start.go:360] acquireMachinesLock for newest-cni-701000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:19.509499    8335 start.go:364] duration metric: took 19.416µs to acquireMachinesLock for "newest-cni-701000"
	I0718 21:26:19.509506    8335 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:19.509512    8335 fix.go:54] fixHost starting: 
	I0718 21:26:19.509618    8335 fix.go:112] recreateIfNeeded on newest-cni-701000: state=Stopped err=<nil>
	W0718 21:26:19.509625    8335 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:19.512958    8335 out.go:177] * Restarting existing qemu2 VM for "newest-cni-701000" ...
	I0718 21:26:19.520938    8335 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:19.520984    8335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ca:22:c6:9f:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:19.522935    8335 main.go:141] libmachine: STDOUT: 
	I0718 21:26:19.522952    8335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:19.522979    8335 fix.go:56] duration metric: took 13.466625ms for fixHost
	I0718 21:26:19.522984    8335 start.go:83] releasing machines lock for "newest-cni-701000", held for 13.482375ms
	W0718 21:26:19.522990    8335 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:19.523031    8335 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:19.523035    8335 start.go:729] Will try again in 5 seconds ...
	I0718 21:26:24.525130    8335 start.go:360] acquireMachinesLock for newest-cni-701000: {Name:mke6e05033a882b4bc3dbe7043b11fc0f366010a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:26:24.525632    8335 start.go:364] duration metric: took 387.583µs to acquireMachinesLock for "newest-cni-701000"
	I0718 21:26:24.525775    8335 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:26:24.525795    8335 fix.go:54] fixHost starting: 
	I0718 21:26:24.526620    8335 fix.go:112] recreateIfNeeded on newest-cni-701000: state=Stopped err=<nil>
	W0718 21:26:24.526644    8335 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:26:24.531097    8335 out.go:177] * Restarting existing qemu2 VM for "newest-cni-701000" ...
	I0718 21:26:24.539087    8335 qemu.go:418] Using hvf for hardware acceleration
	I0718 21:26:24.539302    8335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ca:22:c6:9f:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-1213/.minikube/machines/newest-cni-701000/disk.qcow2
	I0718 21:26:24.549131    8335 main.go:141] libmachine: STDOUT: 
	I0718 21:26:24.549207    8335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0718 21:26:24.549282    8335 fix.go:56] duration metric: took 23.491042ms for fixHost
	I0718 21:26:24.549300    8335 start.go:83] releasing machines lock for "newest-cni-701000", held for 23.644042ms
	W0718 21:26:24.549451    8335 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-701000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-701000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0718 21:26:24.557897    8335 out.go:177] 
	W0718 21:26:24.561131    8335 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0718 21:26:24.561183    8335 out.go:239] * 
	* 
	W0718 21:26:24.563833    8335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:26:24.575082    8335 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-701000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000: exit status 7 (68.332375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-167000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (31.595458ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-167000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-167000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-167000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.377958ms)

** stderr ** 
	error: context "default-k8s-diff-port-167000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-167000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.893333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-167000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.939292ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-167000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-167000 --alsologtostderr -v=1: exit status 83 (41.758542ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-167000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-167000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:26:20.931108    8354 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:20.931237    8354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:20.931241    8354 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:20.931243    8354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:20.931379    8354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:20.931589    8354 out.go:298] Setting JSON to false
	I0718 21:26:20.931596    8354 mustload.go:65] Loading cluster: default-k8s-diff-port-167000
	I0718 21:26:20.931781    8354 config.go:182] Loaded profile config "default-k8s-diff-port-167000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:26:20.936433    8354 out.go:177] * The control-plane node default-k8s-diff-port-167000 host is not running: state=Stopped
	I0718 21:26:20.940620    8354 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-167000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-167000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.39ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (28.534583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-167000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-701000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000: exit status 7 (29.828459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-701000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-701000 --alsologtostderr -v=1: exit status 83 (40.779333ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-701000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:26:24.754753    8378 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:26:24.754907    8378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:24.754910    8378 out.go:304] Setting ErrFile to fd 2...
	I0718 21:26:24.754913    8378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:26:24.755043    8378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 21:26:24.755268    8378 out.go:298] Setting JSON to false
	I0718 21:26:24.755277    8378 mustload.go:65] Loading cluster: newest-cni-701000
	I0718 21:26:24.755484    8378 config.go:182] Loaded profile config "newest-cni-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0718 21:26:24.760207    8378 out.go:177] * The control-plane node newest-cni-701000 host is not running: state=Stopped
	I0718 21:26:24.763122    8378 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-701000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-701000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000: exit status 7 (29.103209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-701000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000: exit status 7 (28.936375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (156/275)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 14.33
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 13.44
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.33
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 209.58
38 TestAddons/parallel/Registry 13.95
39 TestAddons/parallel/Ingress 17.24
40 TestAddons/parallel/InspektorGadget 10.23
41 TestAddons/parallel/MetricsServer 5.26
44 TestAddons/parallel/CSI 43.66
45 TestAddons/parallel/Headlamp 13.42
46 TestAddons/parallel/CloudSpanner 5.17
47 TestAddons/parallel/LocalPath 45.8
48 TestAddons/parallel/NvidiaDevicePlugin 5.16
49 TestAddons/parallel/Yakd 6
50 TestAddons/parallel/Volcano 38.83
53 TestAddons/serial/GCPAuth/Namespaces 0.1
54 TestAddons/StoppedEnableDisable 12.4
62 TestHyperKitDriverInstallOrUpdate 10.28
65 TestErrorSpam/setup 37.7
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.67
69 TestErrorSpam/unpause 0.63
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 50.4
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.32
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.42
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.66
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.74
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 37.02
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.65
94 TestFunctional/serial/InvalidService 4.19
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 9.11
98 TestFunctional/parallel/DryRun 0.23
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.46
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 0.49
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.39
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.19
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.72
128 TestFunctional/parallel/ImageCommands/Setup 1.77
129 TestFunctional/parallel/DockerEnv/bash 0.31
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
160 TestFunctional/parallel/MountCmd/any-port 4.18
161 TestFunctional/parallel/MountCmd/specific-port 1.12
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
173 TestMultiControlPlane/serial/NodeLabels 0.04
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.67
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 76.97
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 3.71
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.2
214 TestMainNoArgs 0.03
261 TestStoppedBinaryUpgrade/Setup 0.91
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
278 TestNoKubernetes/serial/ProfileList 31.23
279 TestNoKubernetes/serial/Stop 3.19
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
296 TestStartStop/group/old-k8s-version/serial/Stop 3.67
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
309 TestStartStop/group/no-preload/serial/Stop 3.57
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/embed-certs/serial/Stop 3.24
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.07
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
336 TestStartStop/group/newest-cni/serial/Stop 3.52
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-065000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-065000: exit status 85 (91.264167ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-065000 | jenkins | v1.33.1 | 18 Jul 24 20:24 PDT |          |
	|         | -p download-only-065000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:24:50
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:24:50.066480    1714 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:24:50.066633    1714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:24:50.066636    1714 out.go:304] Setting ErrFile to fd 2...
	I0718 20:24:50.066639    1714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:24:50.066798    1714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	W0718 20:24:50.066867    1714 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-1213/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-1213/.minikube/config/config.json: no such file or directory
	I0718 20:24:50.068104    1714 out.go:298] Setting JSON to true
	I0718 20:24:50.085380    1714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1458,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:24:50.085447    1714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:24:50.091012    1714 out.go:97] [download-only-065000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:24:50.091122    1714 notify.go:220] Checking for updates...
	W0718 20:24:50.091140    1714 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball: no such file or directory
	I0718 20:24:50.094016    1714 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:24:50.096983    1714 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:24:50.102032    1714 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:24:50.105070    1714 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:24:50.108059    1714 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	W0718 20:24:50.114053    1714 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:24:50.114322    1714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:24:50.118887    1714 out.go:97] Using the qemu2 driver based on user configuration
	I0718 20:24:50.118906    1714 start.go:297] selected driver: qemu2
	I0718 20:24:50.118920    1714 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:24:50.118987    1714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:24:50.121982    1714 out.go:169] Automatically selected the socket_vmnet network
	I0718 20:24:50.127692    1714 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0718 20:24:50.127808    1714 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:24:50.127860    1714 cni.go:84] Creating CNI manager for ""
	I0718 20:24:50.127877    1714 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 20:24:50.127933    1714 start.go:340] cluster config:
	{Name:download-only-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:24:50.133005    1714 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:24:50.137969    1714 out.go:97] Downloading VM boot image ...
	I0718 20:24:50.137982    1714 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0718 20:24:54.528665    1714 out.go:97] Starting "download-only-065000" primary control-plane node in "download-only-065000" cluster
	I0718 20:24:54.528684    1714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:24:54.587890    1714 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 20:24:54.587912    1714 cache.go:56] Caching tarball of preloaded images
	I0718 20:24:54.588100    1714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:24:54.593147    1714 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0718 20:24:54.593157    1714 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:24:54.678832    1714 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0718 20:24:59.602784    1714 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:24:59.602927    1714 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:00.298556    1714 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0718 20:25:00.298734    1714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-065000/config.json ...
	I0718 20:25:00.298763    1714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-065000/config.json: {Name:mk1a7ebf572962433798bc760647481d0d78e6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:00.298986    1714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:25:00.299253    1714 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0718 20:25:00.698645    1714 out.go:169] 
	W0718 20:25:00.704691    1714 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60 0x106475a60] Decompressors:map[bz2:0x140006374e0 gz:0x140006374e8 tar:0x14000637490 tar.bz2:0x140006374a0 tar.gz:0x140006374b0 tar.xz:0x140006374c0 tar.zst:0x140006374d0 tbz2:0x140006374a0 tgz:0x140006374b0 txz:0x140006374c0 tzst:0x140006374d0 xz:0x140006374f0 zip:0x14000637500 zst:0x140006374f8] Getters:map[file:0x1400171a630 http:0x1400077a870 https:0x1400077a960] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0718 20:25:00.704719    1714 out_reason.go:110] 
	W0718 20:25:00.711653    1714 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 20:25:00.715605    1714 out.go:169] 
	
	
	* The control-plane node download-only-065000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-065000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-065000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (14.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-151000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-151000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (14.327277458s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.33s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-151000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-151000: exit status 85 (76.401625ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-065000 | jenkins | v1.33.1 | 18 Jul 24 20:24 PDT |                     |
	|         | -p download-only-065000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-065000        | download-only-065000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only        | download-only-151000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-151000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:01.115715    1740 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:01.115849    1740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:01.115852    1740 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:01.115854    1740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:01.115987    1740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:25:01.117044    1740 out.go:298] Setting JSON to true
	I0718 20:25:01.132946    1740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1469,"bootTime":1721358032,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:25:01.133019    1740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:01.137335    1740 out.go:97] [download-only-151000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:25:01.137455    1740 notify.go:220] Checking for updates...
	I0718 20:25:01.141292    1740 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:01.144360    1740 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:25:01.148397    1740 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:25:01.151354    1740 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:01.154327    1740 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	W0718 20:25:01.160304    1740 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:01.160471    1740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:01.163354    1740 out.go:97] Using the qemu2 driver based on user configuration
	I0718 20:25:01.163363    1740 start.go:297] selected driver: qemu2
	I0718 20:25:01.163367    1740 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:25:01.163422    1740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:01.166321    1740 out.go:169] Automatically selected the socket_vmnet network
	I0718 20:25:01.171368    1740 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0718 20:25:01.171539    1740 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:01.171556    1740 cni.go:84] Creating CNI manager for ""
	I0718 20:25:01.171563    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 20:25:01.171568    1740 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 20:25:01.171595    1740 start.go:340] cluster config:
	{Name:download-only-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:01.175070    1740 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:01.178373    1740 out.go:97] Starting "download-only-151000" primary control-plane node in "download-only-151000" cluster
	I0718 20:25:01.178384    1740 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:01.234525    1740 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:25:01.234537    1740 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:01.234669    1740 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:01.238777    1740 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0718 20:25:01.238785    1740 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:01.317551    1740 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0718 20:25:07.239210    1740 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:07.239378    1740 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:07.783329    1740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:25:07.783551    1740 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-151000/config.json ...
	I0718 20:25:07.783572    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-151000/config.json: {Name:mk0bb316ecd0aa07dca129124464d29088ed5d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:07.783816    1740 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:07.783940    1740 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-151000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-151000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-151000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (13.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-980000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-980000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (13.442419875s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (13.44s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-980000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-980000: exit status 85 (76.001333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-065000 | jenkins | v1.33.1 | 18 Jul 24 20:24 PDT |                     |
	|         | -p download-only-065000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-065000             | download-only-065000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only             | download-only-151000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-151000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-151000             | download-only-151000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only             | download-only-980000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-980000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:15
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:15.721313    1766 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:15.721427    1766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:15.721431    1766 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:15.721433    1766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:15.721571    1766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:25:15.722586    1766 out.go:298] Setting JSON to true
	I0718 20:25:15.738320    1766 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1483,"bootTime":1721358032,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:25:15.738384    1766 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:15.743213    1766 out.go:97] [download-only-980000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:25:15.743365    1766 notify.go:220] Checking for updates...
	I0718 20:25:15.747188    1766 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:15.754173    1766 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:25:15.757241    1766 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:25:15.760177    1766 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:15.765444    1766 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	W0718 20:25:15.771179    1766 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:15.771364    1766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:15.774191    1766 out.go:97] Using the qemu2 driver based on user configuration
	I0718 20:25:15.774202    1766 start.go:297] selected driver: qemu2
	I0718 20:25:15.774206    1766 start.go:901] validating driver "qemu2" against <nil>
	I0718 20:25:15.774270    1766 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:15.777220    1766 out.go:169] Automatically selected the socket_vmnet network
	I0718 20:25:15.780514    1766 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0718 20:25:15.780660    1766 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:15.780699    1766 cni.go:84] Creating CNI manager for ""
	I0718 20:25:15.780707    1766 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 20:25:15.780714    1766 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 20:25:15.780767    1766 start.go:340] cluster config:
	{Name:download-only-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-980000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:15.784189    1766 iso.go:125] acquiring lock: {Name:mkfd3fc0fa00d8f420255e421d9befd9e0c5d7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:15.787143    1766 out.go:97] Starting "download-only-980000" primary control-plane node in "download-only-980000" cluster
	I0718 20:25:15.787151    1766 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:15.855277    1766 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0718 20:25:15.855294    1766 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:15.855496    1766 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:15.859712    1766 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0718 20:25:15.859720    1766 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:15.942340    1766 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0718 20:25:20.298451    1766 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:20.298587    1766 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0718 20:25:20.818592    1766 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0718 20:25:20.818790    1766 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-980000/config.json ...
	I0718 20:25:20.818813    1766 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/download-only-980000/config.json: {Name:mkbc05700d55d5e0a8cfd6b1cec03b41345bedd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:20.819141    1766 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:20.819265    1766 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1213/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-980000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-980000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-980000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-368000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-368000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-368000
--- PASS: TestBinaryMirror (0.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-786000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-786000: exit status 85 (59.309333ms)

-- stdout --
	* Profile "addons-786000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-786000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-786000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-786000: exit status 85 (55.579208ms)

-- stdout --
	* Profile "addons-786000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-786000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (209.58s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-786000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-786000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m29.577828708s)
--- PASS: TestAddons/Setup (209.58s)

TestAddons/parallel/Registry (13.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.848042ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-lwtfh" [a877071e-b0f3-4a1d-9774-a842a037ee0f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00414675s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hrgj7" [1b96b6c7-2720-4237-9022-229e17cf222f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00384575s
addons_test.go:342: (dbg) Run:  kubectl --context addons-786000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-786000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-786000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.628259791s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 ip
2024/07/18 20:29:13 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.95s)

TestAddons/parallel/Ingress (17.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-786000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-786000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-786000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5467b334-4724-47ff-80d0-016e48549f25] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5467b334-4724-47ff-80d0-016e48549f25] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003497625s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-786000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-786000 addons disable ingress --alsologtostderr -v=1: (7.232925875s)
--- PASS: TestAddons/parallel/Ingress (17.24s)

TestAddons/parallel/InspektorGadget (10.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kb5sq" [dc0ac398-3ec8-4d9b-98b2-0816b93e2896] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004450125s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-786000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-786000: (5.227108083s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.29975ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-fc22w" [e69b5b0a-28be-48d3-b16f-eb864265a572] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00408125s
addons_test.go:417: (dbg) Run:  kubectl --context addons-786000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (43.66s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.439667ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-786000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-786000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e78ec2a7-1db5-4d0b-a071-bf9d6626cbc0] Pending
helpers_test.go:344: "task-pv-pod" [e78ec2a7-1db5-4d0b-a071-bf9d6626cbc0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e78ec2a7-1db5-4d0b-a071-bf9d6626cbc0] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003725417s
addons_test.go:586: (dbg) Run:  kubectl --context addons-786000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-786000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-786000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-786000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-786000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-786000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-786000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8b221d3b-c0f3-4726-aa72-109c3eeb2ed7] Pending
helpers_test.go:344: "task-pv-pod-restore" [8b221d3b-c0f3-4726-aa72-109c3eeb2ed7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8b221d3b-c0f3-4726-aa72-109c3eeb2ed7] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003867334s
addons_test.go:628: (dbg) Run:  kubectl --context addons-786000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-786000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-786000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-786000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.155997125s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.66s)

TestAddons/parallel/Headlamp (13.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-786000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tptmh" [777ac5c0-a212-4280-8fde-70bb6b06dc12] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tptmh" [777ac5c0-a212-4280-8fde-70bb6b06dc12] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00428125s
--- PASS: TestAddons/parallel/Headlamp (13.42s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-7882m" [051ce8a9-e70a-48ed-813b-efbc330f3328] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004343542s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-786000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (45.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-786000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-786000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [32996726-c165-401d-a333-2c1a15441d14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [32996726-c165-401d-a333-2c1a15441d14] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [32996726-c165-401d-a333-2c1a15441d14] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.003718083s
addons_test.go:992: (dbg) Run:  kubectl --context addons-786000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 ssh "cat /opt/local-path-provisioner/pvc-4ccef5b2-ad9b-48ca-8732-5e654fd601b6_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-786000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-786000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-786000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.326227833s)
--- PASS: TestAddons/parallel/LocalPath (45.80s)

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8hb6d" [f3358256-148d-4174-9d7b-e3934ecf6082] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003923166s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-786000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-r2glm" [c94e84a2-6991-4e8b-a05a-aaef66e58a18] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0038335s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/parallel/Volcano (38.83s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 1.405ms
addons_test.go:905: volcano-controller stabilized in 1.435792ms
addons_test.go:897: volcano-admission stabilized in 2.417459ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-rrzh8" [f7da23dd-ba81-4bd8-a164-25289d50e2f2] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.004284959s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-qm9fc" [28bbed6a-812d-4d95-9083-1c42c70e0363] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003863667s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-p47zh" [35d0de37-9b21-4702-99f6-d0f2841c4dc9] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003644459s
addons_test.go:924: (dbg) Run:  kubectl --context addons-786000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-786000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-786000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c9b83337-68c5-4ba8-9115-f7d0cfcbb14c] Pending
helpers_test.go:344: "test-job-nginx-0" [c9b83337-68c5-4ba8-9115-f7d0cfcbb14c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c9b83337-68c5-4ba8-9115-f7d0cfcbb14c] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 14.004205375s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-786000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-786000 addons disable volcano --alsologtostderr -v=1: (9.636471917s)
--- PASS: TestAddons/parallel/Volcano (38.83s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-786000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-786000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-786000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-786000: (12.2068715s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-786000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-786000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-786000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.28s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.28s)

TestErrorSpam/setup (37.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-878000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-878000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 --driver=qemu2 : (37.69939275s)
--- PASS: TestErrorSpam/setup (37.70s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 stop: (12.201526542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 stop: (26.060590292s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-878000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-878000 stop: (26.030004916s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19302-1213/.minikube/files/etc/test/nested/copy/1712/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.4s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-020000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-020000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (50.401963958s)
--- PASS: TestFunctional/serial/StartWithProxy (50.40s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.32s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-020000 --alsologtostderr -v=8
E0718 20:33:59.683971    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:33:59.692578    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:33:59.703876    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:33:59.726133    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:33:59.766344    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:33:59.848454    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:00.009323    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:00.331527    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:00.973742    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:02.255874    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:04.818215    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:09.940278    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
E0718 20:34:20.182138    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-020000 --alsologtostderr -v=8: (35.315749208s)
functional_test.go:659: soft start took 35.316133292s for "functional-020000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.32s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-020000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1767892312/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cache add minikube-local-cache-test:functional-020000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cache delete minikube-local-cache-test:functional-020000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-020000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (64.913375ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 kubectl -- --context functional-020000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-020000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (37.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-020000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0718 20:34:40.664093    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-020000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.015564542s)
functional_test.go:757: restart took 37.015663541s for "functional-020000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.02s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-020000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3477499941/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/serial/InvalidService (4.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-020000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-020000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-020000: exit status 115 (100.524292ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32224 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-020000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-020000 delete -f testdata/invalidsvc.yaml: (1.000213541s)
--- PASS: TestFunctional/serial/InvalidService (4.19s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 config get cpus: exit status 14 (33.568042ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 config get cpus: exit status 14 (30.793583ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
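Note: the config round-trip above can be reproduced outside the test harness. A minimal sketch, assuming a minikube binary on PATH and the same profile name as in this run; the helper below is illustrative and not part of functional_test.go:

package main

import (
	"fmt"
	"os/exec"
)

// Mirrors the set/get/unset sequence exercised by functional_test.go:1195.
// Assumes `minikube` is on PATH and the functional-020000 profile still exists.
func main() {
	profile := "functional-020000"
	run := func(args ...string) (string, int) {
		out, err := exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		}
		return string(out), code
	}

	run("config", "set", "cpus", "2")
	if _, code := run("config", "get", "cpus"); code != 0 {
		fmt.Println("expected `config get cpus` to succeed after set, got exit", code)
	}
	run("config", "unset", "cpus")
	// After unset, `config get cpus` exits 14 ("key could not be found"),
	// which is the non-zero exit recorded in the log above.
	if _, code := run("config", "get", "cpus"); code != 14 {
		fmt.Println("expected exit 14 after unset, got", code)
	}
}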

TestFunctional/parallel/DashboardCmd (9.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-020000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-020000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4680: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.11s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-020000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-020000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.933667ms)
-- stdout --
	* [functional-020000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0718 20:35:57.528594    4663 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:35:57.528749    4663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:35:57.528753    4663 out.go:304] Setting ErrFile to fd 2...
	I0718 20:35:57.528755    4663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:35:57.528878    4663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:35:57.529923    4663 out.go:298] Setting JSON to false
	I0718 20:35:57.547594    4663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2125,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:35:57.547691    4663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:35:57.553843    4663 out.go:177] * [functional-020000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0718 20:35:57.560779    4663 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:35:57.560833    4663 notify.go:220] Checking for updates...
	I0718 20:35:57.568690    4663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:35:57.571628    4663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:35:57.574729    4663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:35:57.577772    4663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:35:57.580804    4663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:35:57.584071    4663 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:35:57.584333    4663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:35:57.587778    4663 out.go:177] * Using the qemu2 driver based on existing profile
	I0718 20:35:57.594683    4663 start.go:297] selected driver: qemu2
	I0718 20:35:57.594689    4663 start.go:901] validating driver "qemu2" against &{Name:functional-020000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-020000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:35:57.594740    4663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:35:57.601833    4663 out.go:177] 
	W0718 20:35:57.605737    4663 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0718 20:35:57.609736    4663 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-020000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
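Note: the first dry-run above is expected to fail, since 250MB is below minikube's usable minimum of 1800MB; in this run that rejection surfaced as exit status 23 with an RSRC_INSUFFICIENT_REQ_MEMORY message. A small sketch of the same check, with the binary path and profile name taken from this run for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// Dry-run start with an undersized --memory; per the log above this run
// exits with status 23 and an RSRC_INSUFFICIENT_REQ_MEMORY message.
func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-020000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
	err := cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit status:", exitErr.ExitCode()) // 23 in the run above
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	} else {
		fmt.Println("unexpectedly succeeded")
	}
}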

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-020000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-020000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.871459ms)
-- stdout --
	* [functional-020000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0718 20:35:57.751787    4674 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:35:57.751899    4674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:35:57.751903    4674 out.go:304] Setting ErrFile to fd 2...
	I0718 20:35:57.751905    4674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:35:57.752024    4674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
	I0718 20:35:57.753347    4674 out.go:298] Setting JSON to false
	I0718 20:35:57.770342    4674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2125,"bootTime":1721358032,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0718 20:35:57.770454    4674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:35:57.774763    4674 out.go:177] * [functional-020000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0718 20:35:57.783130    4674 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:35:57.783193    4674 notify.go:220] Checking for updates...
	I0718 20:35:57.790726    4674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	I0718 20:35:57.793784    4674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0718 20:35:57.796815    4674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:35:57.800741    4674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	I0718 20:35:57.803814    4674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:35:57.807005    4674 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:35:57.807245    4674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:35:57.811696    4674 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0718 20:35:57.818689    4674 start.go:297] selected driver: qemu2
	I0718 20:35:57.818694    4674 start.go:901] validating driver "qemu2" against &{Name:functional-020000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-020000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:35:57.818741    4674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:35:57.824717    4674 out.go:177] 
	W0718 20:35:57.828777    4674 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0718 20:35:57.831815    4674 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.46s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [aee4a9a7-bd8f-4306-8e61-46447c7e4e12] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00333875s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-020000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-020000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-020000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-020000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [05630930-f59a-4fc0-8ab3-f64b61146876] Pending
helpers_test.go:344: "sp-pod" [05630930-f59a-4fc0-8ab3-f64b61146876] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [05630930-f59a-4fc0-8ab3-f64b61146876] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004278583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-020000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-020000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-020000 delete -f testdata/storage-provisioner/pod.yaml: (1.05890325s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-020000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [18b5fec2-36dd-4bef-bee6-35458324bcdf] Pending
helpers_test.go:344: "sp-pod" [18b5fec2-36dd-4bef-bee6-35458324bcdf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [18b5fec2-36dd-4bef-bee6-35458324bcdf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003662583s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-020000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.46s)
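Note: the PVC check above follows a fixed sequence: create the claim, run a pod that mounts it, write a file, recreate the pod, and verify the file survived. A condensed sketch of that flow driven by kubectl; the manifest paths are the test's own testdata, while the use of `kubectl wait` in place of the harness's pod-polling helpers is illustrative:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a command against the functional-020000 context and stops on error.
func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-020000"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod: the claim (and /tmp/mount/foo) should outlive it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}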

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh -n functional-020000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cp functional-020000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3924609929/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh -n functional-020000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh -n functional-020000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.49s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1712/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /etc/test/nested/copy/1712/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1712.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /etc/ssl/certs/1712.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1712.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /usr/share/ca-certificates/1712.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /etc/ssl/certs/17122.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /usr/share/ca-certificates/17122.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-020000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh "sudo systemctl is-active crio": exit status 1 (75.51825ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-020000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-020000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-020000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-020000 image ls --format short --alsologtostderr:
I0718 20:36:03.582900    4704 out.go:291] Setting OutFile to fd 1 ...
I0718 20:36:03.583095    4704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.583100    4704 out.go:304] Setting ErrFile to fd 2...
I0718 20:36:03.583102    4704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.583252    4704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 20:36:03.583665    4704 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.583733    4704 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.584598    4704 ssh_runner.go:195] Run: systemctl --version
I0718 20:36:03.584610    4704 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/functional-020000/id_rsa Username:docker}
I0718 20:36:03.607883    4704 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-020000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-020000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-020000 | 77fe14cf2c32c | 30B    |
| docker.io/library/nginx                     | latest            | 443d199e8bfcc | 193MB  |
| docker.io/library/nginx                     | alpine            | 5461b18aaccf3 | 44.8MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-020000 image ls --format table --alsologtostderr:
I0718 20:36:03.811002    4710 out.go:291] Setting OutFile to fd 1 ...
I0718 20:36:03.811185    4710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.811188    4710 out.go:304] Setting ErrFile to fd 2...
I0718 20:36:03.811190    4710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.811310    4710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 20:36:03.811756    4710 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.811835    4710 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.812743    4710 ssh_runner.go:195] Run: systemctl --version
I0718 20:36:03.812757    4710 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/functional-020000/id_rsa Username:docker}
I0718 20:36:03.835500    4710 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-020000 image ls --format json --alsologtostderr:
[{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17",
"repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-020000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"77fe14cf2c32c9864b5d2fd23cb9d34828cc301da51326def58a1e8fabc673f0","repoDigests":[],"repoTags"
:["docker.io/library/minikube-local-cache-test:functional-020000"],"size":"30"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-020000 image ls --format json --alsologtostderr:
I0718 20:36:03.728925    4708 out.go:291] Setting OutFile to fd 1 ...
I0718 20:36:03.729102    4708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.729106    4708 out.go:304] Setting ErrFile to fd 2...
I0718 20:36:03.729108    4708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.729258    4708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 20:36:03.729688    4708 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.729750    4708 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.730627    4708 ssh_runner.go:195] Run: systemctl --version
I0718 20:36:03.730638    4708 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/functional-020000/id_rsa Username:docker}
I0718 20:36:03.753981    4708 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-020000 image ls --format yaml --alsologtostderr:
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 77fe14cf2c32c9864b5d2fd23cb9d34828cc301da51326def58a1e8fabc673f0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-020000
size: "30"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-020000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-020000 image ls --format yaml --alsologtostderr:
I0718 20:36:03.654122    4706 out.go:291] Setting OutFile to fd 1 ...
I0718 20:36:03.654290    4706 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.654295    4706 out.go:304] Setting ErrFile to fd 2...
I0718 20:36:03.654297    4706 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.654435    4706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 20:36:03.654891    4706 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.654965    4706 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.655858    4706 ssh_runner.go:195] Run: systemctl --version
I0718 20:36:03.655867    4706 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/functional-020000/id_rsa Username:docker}
I0718 20:36:03.678666    4706 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh pgrep buildkitd: exit status 1 (55.566667ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image build -t localhost/my-image:functional-020000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-020000 image build -t localhost/my-image:functional-020000 testdata/build --alsologtostderr: (1.590637833s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-020000 image build -t localhost/my-image:functional-020000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 561d4ba9ae35
---> Removed intermediate container 561d4ba9ae35
---> 59919551a68d
Step 3/3 : ADD content.txt /
---> c358d773091f
Successfully built c358d773091f
Successfully tagged localhost/my-image:functional-020000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-020000 image build -t localhost/my-image:functional-020000 testdata/build --alsologtostderr:
I0718 20:36:03.937721    4714 out.go:291] Setting OutFile to fd 1 ...
I0718 20:36:03.937988    4714 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.937996    4714 out.go:304] Setting ErrFile to fd 2...
I0718 20:36:03.937999    4714 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:36:03.938132    4714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1213/.minikube/bin
I0718 20:36:03.938623    4714 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.939374    4714 config.go:182] Loaded profile config "functional-020000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:36:03.940194    4714 ssh_runner.go:195] Run: systemctl --version
I0718 20:36:03.940202    4714 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1213/.minikube/machines/functional-020000/id_rsa Username:docker}
I0718 20:36:03.963406    4714 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4218354698.tar
I0718 20:36:03.963481    4714 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0718 20:36:03.967421    4714 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4218354698.tar
I0718 20:36:03.969159    4714 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4218354698.tar: stat -c "%s %y" /var/lib/minikube/build/build.4218354698.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4218354698.tar': No such file or directory
I0718 20:36:03.969175    4714 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4218354698.tar --> /var/lib/minikube/build/build.4218354698.tar (3072 bytes)
I0718 20:36:03.980564    4714 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4218354698
I0718 20:36:03.986015    4714 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4218354698 -xf /var/lib/minikube/build/build.4218354698.tar
I0718 20:36:03.990653    4714 docker.go:360] Building image: /var/lib/minikube/build/build.4218354698
I0718 20:36:03.990703    4714 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-020000 /var/lib/minikube/build/build.4218354698
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0718 20:36:05.487471    4714 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-020000 /var/lib/minikube/build/build.4218354698: (1.496790917s)
I0718 20:36:05.487549    4714 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4218354698
I0718 20:36:05.491294    4714 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4218354698.tar
I0718 20:36:05.494324    4714 build_images.go:217] Built localhost/my-image:functional-020000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4218354698.tar
I0718 20:36:05.494344    4714 build_images.go:133] succeeded building to: functional-020000
I0718 20:36:05.494348    4714 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls
2024/07/18 20:36:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.72s)
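Note: judging by the build steps echoed above, the testdata/build context is essentially a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A sketch that recreates an equivalent context and drives the same `image build` command; the temporary directory and file contents are illustrative, not the repository's actual testdata:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a build context like the one shown in the build output above.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Same command shape as functional_test.go:314, with an illustrative tag.
	cmd := exec.Command("minikube", "-p", "functional-020000",
		"image", "build", "-t", "localhost/my-image:functional-020000", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}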

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.75483775s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-020000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-020000 docker-env) && out/minikube-darwin-arm64 status -p functional-020000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-020000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-020000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-020000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-6gxzl" [a811d80a-9191-4d98-b84e-cc77eddaa160] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-6gxzl" [a811d80a-9191-4d98-b84e-cc77eddaa160] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003742333s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image load --daemon docker.io/kicbase/echo-server:functional-020000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image load --daemon docker.io/kicbase/echo-server:functional-020000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-020000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image load --daemon docker.io/kicbase/echo-server:functional-020000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image save docker.io/kicbase/echo-server:functional-020000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image rm docker.io/kicbase/echo-server:functional-020000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-020000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 image save --daemon docker.io/kicbase/echo-server:functional-020000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-020000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-020000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-020000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-020000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-020000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4521: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-020000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-020000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [959a09cb-cd53-4843-8984-41643791efdf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [959a09cb-cd53-4843-8984-41643791efdf] Running
E0718 20:35:21.624679    1712 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1213/.minikube/profiles/addons-786000/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003797459s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 service list -o json
functional_test.go:1490: Took "78.875458ms" to run "out/minikube-darwin-arm64 -p functional-020000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30228
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30228
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-020000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.249.250 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-020000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "85.251875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.751833ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "81.112584ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.511416ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2758975190/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721360150538138000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2758975190/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721360150538138000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2758975190/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721360150538138000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2758975190/001/test-1721360150538138000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.129ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 03:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 03:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 03:35 test-1721360150538138000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh cat /mount-9p/test-1721360150538138000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-020000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [766889b0-bff8-47cd-90be-8b9ead0c2674] Pending
helpers_test.go:344: "busybox-mount" [766889b0-bff8-47cd-90be-8b9ead0c2674] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [766889b0-bff8-47cd-90be-8b9ead0c2674] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [766889b0-bff8-47cd-90be-8b9ead0c2674] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002451292s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-020000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2758975190/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1558104097/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.102584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1558104097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh "sudo umount -f /mount-9p": exit status 1 (57.754375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-020000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1558104097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount1: exit status 1 (66.604875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount2: exit status 1 (54.816167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-020000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-020000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-020000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2093368901/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-020000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-020000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-020000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-256000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.667308417s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.96967875s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (76.97s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-615000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-615000 --output=json --user=testUser: (3.714600875s)
--- PASS: TestJSONOutput/stop/Command (3.71s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-796000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-796000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.819208ms)

-- stdout --
	{"specversion":"1.0","id":"088471cd-1573-439e-a8fe-608b651da971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-796000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"705b4f11-1b4b-403e-8e51-19be2bde5210","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"2e55b262-8b74-46c5-8fef-48429f29a14b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig"}}
	{"specversion":"1.0","id":"af8a885a-2b98-4149-b42e-e990d6440d66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"977df17b-2d95-45bf-b11f-22eb82a7e814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5d54bc8-47b9-4b85-a45c-1d886460268e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube"}}
	{"specversion":"1.0","id":"8a9d86bb-acdb-4fc7-92fd-d44665e33f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50b85608-3d7a-48b9-9ad2-3861b1185d69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-796000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-796000
--- PASS: TestErrorJSONOutput (0.20s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.757875ms)

-- stdout --
	* [NoKubernetes-339000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1213/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1213/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.658209ms)

-- stdout --
	* The control-plane node NoKubernetes-339000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-339000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.59949125s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.626884292s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-339000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-339000: (3.191425s)
--- PASS: TestNoKubernetes/serial/Stop (3.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.598917ms)

-- stdout --
	* The control-plane node NoKubernetes-339000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-339000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-465000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-969000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-969000 --alsologtostderr -v=3: (3.67317575s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-969000 -n old-k8s-version-969000: exit status 7 (57.053ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-969000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-436000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-436000 --alsologtostderr -v=3: (3.569757709s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.57s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (60.108083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-436000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-489000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-489000 --alsologtostderr -v=3: (3.236966958s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-489000 -n embed-certs-489000: exit status 7 (56.43825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-489000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-167000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-167000 --alsologtostderr -v=3: (2.06602725s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-167000 -n default-k8s-diff-port-167000: exit status 7 (53.351667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-167000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-701000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-701000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-701000 --alsologtostderr -v=3: (3.523518458s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-701000 -n newest-cni-701000: exit status 7 (55.244167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-701000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (23/275)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-736000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-736000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/hosts:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/resolv.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-736000

>>> host: crictl pods:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crictl containers:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: describe netcat deployment:
error: context "cilium-736000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-736000" does not exist

>>> k8s: netcat logs:
error: context "cilium-736000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-736000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-736000" does not exist

>>> k8s: coredns logs:
error: context "cilium-736000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-736000" does not exist

>>> k8s: api server logs:
error: context "cilium-736000" does not exist

>>> host: /etc/cni:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: ip a s:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: ip r s:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: iptables-save:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: iptables table nat:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-736000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-736000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-736000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-736000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-736000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-736000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-736000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: kubelet daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: kubelet logs:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-736000

>>> host: docker daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: docker daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: docker system info:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: cri-docker daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: cri-docker daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: cri-dockerd version:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: containerd daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: containerd daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: containerd config dump:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crio daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crio daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/crio:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crio config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

----------------------- debugLogs end: cilium-736000 [took: 2.160976459s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-736000
--- SKIP: TestNetworkPlugins/group/cilium (2.26s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-213000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-213000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
